Data Storage Hardware

Disk Drive Failures 15 Times What Vendors Say

jcatcw writes "A Carnegie Mellon University study indicates that customers are replacing disk drives far more frequently than vendor estimates of mean time to failure (MTTF) would suggest. The study examined large production systems, including high-performance computing sites and Internet services sites running SCSI, FC, and SATA drives. The data sheets for the drives indicated MTTF between 1 and 1.5 million hours, which should mean annual failure rates of at most 0.88%; in practice, annual replacement rates were between 2% and 4%. The study also shows no evidence that Fibre Channel drives are any more reliable than SATA drives."
  • Repeat? (Score:2, Insightful)

    by Corith ( 19511 )
    Didn't we already see this evidence with Google's report?
    • Re:Repeat? (Score:4, Informative)

      by georgewilliamherbert ( 211790 ) on Friday March 02, 2007 @04:20PM (#18211790)
      We did both this study and the Google study in the first couple of days after FAST was over. Completely redundant....
    • Re: (Score:2, Interesting)

      Yes, and it's mentioned in the report.
      The best part about the entire thing is the very last quote:

      "If they told me it was 100,000 hours, I'd still protect it the same way. If they told me if was 5 million hours I'd still protect it the same way. I have to assume every drive could fail."

      Just common sense.
      • Re:Repeat? (Score:5, Informative)

        by ajs ( 35943 ) <[ajs] [at] [ajs.com]> on Friday March 02, 2007 @04:34PM (#18211992) Homepage Journal

        The best part about the entire thing is the very last quote:

        "If they told me it was 100,000 hours, I'd still protect it the same way. If they told me if was 5 million hours I'd still protect it the same way. I have to assume every drive could fail."

        Just common sense.
        It's "common sense," but not as useful as one might hope. What MTTF tells you is, within some expected margin of error, how much failure you should plan on in a statistically significant farm. So, for example, I know of an installation that has thousands of disks used for everything from root disks on relatively drop-in-replaceable compute servers to storage arrays. On the budgetary side, that installation wants to know how much replacement cost to expect per annum. On the admin side, that installation wants to be prepared with an appropriate number of redundant systems, and wants to be able to assert a failure probability for key systems. That is, if you have a raid array with 5 disks and one spare, then you want to know the probability that three disks will fail on it in the, let's say, 6 hour worst-case window before you can replace any of them. That probability is non-zero, and must be accounted for in your computation of anticipated downtime, along with every other unlikely, but possible event that you can account for.

        When a vendor tells you to expect a 0.2% failure rate, but it's really 2-4%, that's a HUGE shift in the impact on your organization.

        When you just have one or a handful of disks in your server at home, that's a very different situation from a datacenter full of systems with all kinds of disk needs.
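
        To put rough numbers on that last scenario, here is a sketch in Python (my own illustration, not from TFA; it assumes independent failures and a constant hazard rate, i.e. exponential lifetimes, which is exactly the assumption the CMU paper questions, and it uses the 6-hour window and 5-disk-plus-spare array from above):

        import math

        def p_fail_in_window(mttf_hours, window_hours):
            # Probability a single drive fails during the window, assuming
            # exponentially distributed lifetimes with the given MTTF.
            return 1.0 - math.exp(-window_hours / mttf_hours)

        def p_at_least_k_of_n(n, k, p):
            # Binomial tail: probability that k or more of n drives fail.
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        HOURS_PER_YEAR = 8760
        datasheet_mttf = 1_000_000               # hours, from the summary
        observed_mttf = HOURS_PER_YEAR / 0.03    # backed out of a ~3% replacement rate

        for label, mttf in (("datasheet", datasheet_mttf), ("observed", observed_mttf)):
            p = p_fail_in_window(mttf, 6)        # 6-hour worst-case swap window
            risk = p_at_least_k_of_n(6, 3, p)    # 3 failures in a 5-disk-plus-spare array
            print(f"{label}: per-drive p = {p:.2e}, 3-of-6 in window = {risk:.2e}")

        The absolute probabilities are tiny either way; the point is that moving from the datasheet figure to an observed ~3% replacement rate multiplies them by well over an order of magnitude, and that is what changes the downtime budget.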
        • just assume 3 years (Score:5, Informative)

          by crabpeople ( 720852 ) on Friday March 02, 2007 @05:05PM (#18212416) Journal
          A good rule of thumb is 3 years. Most hard drives fail in 3 years. I don't know why, but I'm currently seeing a lot of bad 2004-branded drives and consider that right on schedule. Last year the 02-03 drives were the ones failing left and right. I just pulled one this morning that's stamped March 04. It just started acting up a few days ago. Like clockwork.

          • by misleb ( 129952 )
            It is pretty amazing how that works out. Apple recalled a large subset of G4 eMacs because of that leaky capacitor issue in the power supplies. And after a few years of service, a bunch started failing within a window of a couple of months. They got repaired for free, of course. But it was fairly chaotic having so many machines out for service at a time.

            Then again, considering the assembly-line efficiency and relative consistency with which devices and components are made these days, maybe it isn't
        • Re:Repeat? (Score:4, Informative)

          by ShakaUVM ( 157947 ) on Friday March 02, 2007 @08:30PM (#18214254) Homepage Journal
          Except MTBF is just pulled out of their asses. Look at the development cycle of a hard drive. Look at the MTBF. I used to work for an engineering company, and have worked doing test suites to determine MTBF. Sure, there's numbers involved, but it's probably 60% wishful thinking and 40% science.

          Believe me, they aren't determining an 11 year MTBF empirically.
      • But what kind of money would you budget for replacing/fixing drive failure in each case? That's the rub.
  • by User 956 ( 568564 ) on Friday March 02, 2007 @04:17PM (#18211744) Homepage
    The data sheets for the drives indicated MTTF between 1 and 1.5 million hours.

    Yeah, but I bet they didn't say what planet those hours are on.
    • Re: (Score:3, Funny)

      Or what percentage of the speed of light they were traveling.
    • by astrashe ( 7452 )
      If an observer on a rail platform measures the MTTF of a hard disk on a rail car moving at speeds close to the speed of light...

    • Yes, I am SHOCKED that companies have implemented a systematic program of distorting the truth in order to increase profits.

      I propose a new term for the heinous practice---"marketing".
      • by Beardo the Bearded ( 321478 ) on Friday March 02, 2007 @04:43PM (#18212126)
        What, really?

        The same companies that lie about the capacity on EVERY SINGLE DRIVE they make? You don't think that they're a bunch of lying fucking weasels? (We're both using sarcasm here.)

        I don't care how you spin it. 1024 is the multiple. NOT 1000!

        Failure doesn't get fixed because making a drive more reliable means it costs more. If it costs more, it's not going to get purchased.

        • Re: (Score:3, Informative)

          by Lord Ender ( 156273 )
          Before computers were used in real engineering, we could get away with "k" sometimes meaning 1024 (like in memory addresses) and sometimes meaning 1000 (like in network speeds). Those days are past. Now that computers are part of real engineering work, even the slightest amount of ambiguity is not acceptable.

          Differentiating between "k" (=1000) and "ki" (=1024) is a sign that the computer industry is finally maturing. It's called progress.

        • by Intron ( 870560 )
          And those lying road signs, too. Everyone knows there should be 1024 meters in a kilometer!
        • Re: (Score:3, Informative)

          by Chonine ( 840828 )
          Standard metric is indeed powers of 10, and a megabyte is indeed 10^6 bytes.

          To clear up the confusion, a separate notation for binary quantities was developed: 2^20 bytes is a mebibyte.

          http://en.wikipedia.org/wiki/Mebibyte [wikipedia.org]

        • by binarybum ( 468664 ) on Friday March 02, 2007 @06:46PM (#18213536) Homepage
          yeah, I used to think they were dirty bastards, but they just work on a different scale than the rest of us.
              The trick is to purchase your HD in pennies.

            "100,000 pennies! why that's 1024 dollars!!"
    • I feel sorry for anyone buying drives on the low end of that range. A MTTF of 1 hour really sucks.
      • I feel sorry for anyone buying drives on the low end of that range. A MTTF of 1 hour really sucks.

        Well, they don't call it "Best Borrow" for no reason.
    • How does it compare to flash MTBF? Or between manufacturers? If the ratio of actual to stated MTBF is the same for all hard disks, that's fine, I guess, since I know how to divide by 15. But if it varies between manufacturers or between alternative technologies (DVD, hard drive, flash drive, metal film drive, tape), then this matters a great deal, as one will make the wrong choices or pay way too much for reliability not gained.

      Unless they warranty this, which none do, the spec is meaningless, and they might

  • In other news... (Score:5, Informative)

    by Mr. Underbridge ( 666784 ) on Friday March 02, 2007 @04:22PM (#18211808)
    ...Carnegie Mellon researchers can't tell a mean from a median. This is inherently a long-tailed distribution in which the mean will be much higher than the median. Imagine a simple situation in which failure rates are 50%/yr, but those that last beyond a year last a long time. Mean time to failure might be 1000 years. You simply can't compare the statistics the way they have without knowing a lot more about the distribution than I saw in the article. Perhaps I missed it while skimming.
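
    To illustrate the parent's hypothetical with a toy simulation (the numbers below are invented purely for illustration: half the drives die within a year, the survivors follow a long-lived exponential tail), the mean and median end up orders of magnitude apart:

    import random
    import statistics

    random.seed(0)
    lifetimes = []
    for _ in range(100_000):
        if random.random() < 0.5:
            lifetimes.append(random.uniform(0, 1))          # fails within the first year
        else:
            lifetimes.append(random.expovariate(1 / 2000))  # long-lived tail, mean ~2000 years

    print("mean lifetime (years):   ", round(statistics.mean(lifetimes)))       # ~1000
    print("median lifetime (years): ", round(statistics.median(lifetimes), 2))  # ~1
    print("failed in year one:      ", sum(t < 1 for t in lifetimes) / len(lifetimes))  # ~0.5

    So a farm could be churning through half its drives every year while the MTTF, which is a mean, honestly reads around a thousand years. Whether real drive lifetimes look like this is exactly the question the distribution data would have to answer.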
  • I believe it... (Score:3, Informative)

    by madhatter256 ( 443326 ) on Friday March 02, 2007 @04:23PM (#18211832)
    Yeh. Don't rely on the HDD after it surpasses its manufacturer's warranty.
    • Re: (Score:2, Insightful)

      by SighKoPath ( 956085 )
      Also, don't rely on the HDD before it surpasses its manufacturer warranty. All the warranty means is you get a replacement if it breaks - it doesn't provide any extra guarantees of the disk not failing.
    • Don't rely on a HDD ever. This is why we have backups and RAID. Even RAID's not enough by itself.
    • Sigh.

      As Schwartz [sun.com] put it recently, there are two kinds of disk: Those that have failed, and those that are going to.
    • Hell, nowadays I wouldn't rely on one single drive even before it reaches the end of its warranty. Usually by the end of the shorter warranties (1 yr) you've accumulated enough important stuff to make the data loss much more painful than the cost of the replacement drive.

      Now in some cases manufacturers with longer warranties are stating that they have more faith in their product, and certainly the sudden drop in warranty length (from 2-3 years down to one for many) indicates a lack of faith in their products.

      Basically, a w
  • Fuzzy math (Score:2, Insightful)

    by Spazmania ( 174582 )
    Disk Drive Failures 15 Times What Vendors Say [...] That should mean annual failure rates of 0.88% [but] annual replacement rates were between 2% and 4%.

    0.88 * 15 = 4?
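
    The arithmetic works out if the 15x refers to the worst case rather than the typical case. A quick back-of-the-envelope check (the ~13% worst-case replacement rate is what the CMU paper reportedly observed at its worst site; the other figures are from the summary):

    HOURS_PER_YEAR = 8760

    # Nominal AFR implied by the datasheet MTTF figures in the summary.
    for mttf in (1_000_000, 1_500_000):
        print(f"MTTF {mttf:>9,} h -> nominal AFR {HOURS_PER_YEAR / mttf:.2%}")

    nominal = HOURS_PER_YEAR / 1_000_000        # ~0.88%
    print("typical observed 2-4% ->", round(0.02 / nominal, 1), "to", round(0.04 / nominal, 1), "x nominal")
    print("worst-case ~13%       ->", round(0.13 / nominal, 1), "x nominal")

    So typical sites in the study ran at roughly 2x-5x the datasheet rate, and the 15x in the headline appears to come from the extreme end of the range.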
  • by Lendrick ( 314723 ) on Friday March 02, 2007 @04:26PM (#18211880) Homepage Journal
    In the article, they mention that the study didn't track actual failures, just how often customers *thought* there was a failure and replaced their drive. There are all sorts of reasons someone might think a drive has failed. They're not all correct. I can't begin to guess what percentage of those perceived failures were for real.

    This study is not news. All it says is that people *think* their hard drives fail more often than the mean time to failure.
    • And I think they fail less often than the MTTF. There, the statistics are satisfied as well, and it's still not news.
    • by crabpeople ( 720852 ) on Friday March 02, 2007 @05:12PM (#18212520) Journal
      That's fair, but if you pull a bad drive, ghost it (assuming it's not THAT bad), plop the new drive in, and the system works flawlessly, what are you to assume?

      I don't really care to know exactly what is wrong with the drive. If I replace it and the problem goes away, I would consider that a bad drive, even if you could still read and write to it. I just did one this morning that showed no symptoms other than Windows taking what I considered a long time to boot. All the user complained about was sluggish performance, and there were no errors or drive noises to speak of. Problem fixed, user happy, drive bad.

      As I already posted, a good rule of thumb is that most drives go bad about 3 years from the date of manufacture.

      • You obviously know what you're doing. Not all users do... in fact, the bitter techie in me is screaming that most don't. :)
  • by neiko ( 846668 ) on Friday March 02, 2007 @04:30PM (#18211936)
    TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals? Or are we assuming Interface = Data Throughput?
    • Re: (Score:3, Insightful)

      by ender- ( 42944 )
      TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals? Or are we assuming Interface = Data Throughput?

      That statement is based on the long-held assumption that hard drive manufacturers put better materials and engineering into enterprise-targeted drives [Fibre] than they put into consumer-level drives [SATA].

      Guess not...

      • Re: (Score:3, Informative)

        by Spazmania ( 174582 )
        They certainly charge enough more. SATA drives run about $0.50 per gig. Comparable Fibre Channel drives run about $3 per gig. A sensible person would expect the Fibre Channel drive to be as much as 6 times as reliable, but per the article there is no difference.
    • by Danga ( 307709 )
      I thought the exact same thing. They are just dumbasses. The interface has probably zero effect on failure rate compared to the mechanical parts which are just about the same in all the drives.

      FTA:

      "the things that can go wrong with a drive are mechanical -- moving parts, motors, spindles, read-write heads," and these components are usually the same"

      The only effect I can see it having would be if really shitty parts were used for one interface compared to the other.
      • by Intron ( 870560 )
        An alternative and simpler explanation is that the manufacturers are correctly specifying MTBF when drives are properly mounted and cooled. When used in the substandard conditions actually experienced, then overheating and lousy shock and vibration characteristics cause any drive to fail much sooner.
    • by mollymoo ( 202721 ) on Friday March 02, 2007 @04:42PM (#18212110) Journal

      TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals?

      Fibre Channel drives, like SCSI drives, are assumed to be "enterprise" drives and therefore better built than "consumer" SATA and PATA drives. It's nothing inherent to the interface, but a consequence of the environment in which that interface is expected to be used. At least, that's the idea.

    • TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals?

      Because drive manufacturers claim [usenix.org] they use different hardware for the drive based on the interface. For example, a SCSI drive supposedly contains a disk designed for heavier use than an ATA drive, they aren't just the same disk with different interfaces.
  • by Danga ( 307709 ) on Friday March 02, 2007 @04:30PM (#18211940)
    I have had 3 personal-use hard drives go bad in the last 5 years; they were either Maxtor or Western Digital. I am not hard on the drives, other than leaving them on 24/7. The drives that failed were all just for data backup, and I put them in big, well-ventilated boxes. With this use I would think the drives would last for years (at least 5 years), but nope! The drives did not arrive broken either; they all functioned great for 1-2 years before dying. The quality of consumer hard drives nowadays is way, WAY low, and the manufacturers should do something about it.

    I don't consider myself a fluke because I know quite a few other people who have had similar problems. What's the deal?

    Also, does anyone else find this quote interesting?:

    "and may have failed for any reason, such as a harsh environment at the customer site and intensive, random read/write operations that cause premature wear to the mechanical components in the drive."

    It's a f$#*ing hard drive! Jesus H Tapdancing Christ, how can they call that premature wear? Do they calculate the MTTF by just letting the drive sit idle and never reading and writing to it? That actually wouldn't surprise me.
    • I have had 3 personal use hard drives go bad in the last 5 years, they were either Maxtor or Wester Digital. I am not hard on the drives other than leaving them on 24/7.
      Ever read the manufacturer's fine print on how they determine MTBF? Last time I did (yeah, it was over a year ago,) it read: "8 hour a day usage." Drives that are on 24/7 get HOT, and heat leads to mechanical failure.
      • Ever read the manufacturer's fine print on how they determine MTBF? Last time I did (yeah, it was over a year ago,) it read: "8 hour a day usage." Drives that are on 24/7 get HOT, and heat leads to mechanical failure.

        MTTF, no? MTBF would indicate a fixable system.

        Yeah, but there has to be a plateau to the heat curve at some point. It's not as if the heat just keeps going up and up... I would think that the constant on/off each day, causing expansion and contraction of the parts as they heat and cool, woul
      • I was able to quickly find at least one reference [seagate.com] to this measure (8 hours/day, 300 days a year for personal storage [PS] drives; 24 hours/day, 365 days a year for enterprise storage [ES] drives).

        The most significant difference in the reliability specification of PS and ES drives is the expected power-on hours (POH) for each drive type. The MTBF calculation for PS assumes a POH of 8 hours/day for 300 days/year, while the ES specification assumes 24 hours per day, 365 days per year.
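
        Read naively (failures accruing in proportion to power-on hours), the same datasheet MTBF then implies quite different calendar-year failure rates for the two duty cycles. A sketch with a made-up 600,000-hour MTBF:

        MTBF_HOURS = 600_000                     # hypothetical datasheet value

        duty_cycles = {
            "personal storage (8 h/day, 300 d/yr)":    8 * 300,   # 2,400 POH/yr
            "enterprise storage (24 h/day, 365 d/yr)": 24 * 365,  # 8,760 POH/yr
        }

        for label, poh in duty_cycles.items():
            print(f"{label}: implied AFR ~ {poh / MTBF_HOURS:.2%}")
        # 24/7 operation racks up ~3.6x the power-on hours, so the same MTBF
        # works out to ~3.6x the expected failures per calendar year.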

      • Do you seriously think a drive won't have reached thermal equilibrium after an hour, let alone after several hours? Mine seem to get up to their 'normal' temperatures in 30 minutes or less. And according to the Google study, heat doesn't lead to a significantly increased risk of failure till you get above 45 C or so.
        • Do you seriously think a drive won't have reached thermal equilibrium after an hour, let alone after several hours? Mine seem to get up to their 'normal' temperatures in 30 minutes or less.

          Sure, they will have reached "thermal equilibrium" after a short period of time. See Figure 9 in this paper [seagate.com], "Reliability reduction with increased power on hours, ranging from a few hours per day to 24 x 7 operation," to see why I'm not sure that merely being hot is the problem.

          And according to the Google study

            • Sure, they will have reached "thermal equilibrium" after a short period of time. See Figure 9 in this paper, "Reliability reduction with increased power on hours, ranging from a few hours per day to 24 x 7 operation," to see why I'm not sure that merely being hot is the problem.

            The graph mostly seems to indicate that drives wear out when they are spinning. It's not all that far from a straight line (if you ignore the very low hours), which you would expect if wear was a significant component in the risk

            • To a rough approximation, that graph shows a 0.5% risk of failure independent of usage level, then an additional 0.5% risk per 3000 hours/year of usage.

              Perhaps you misinterpreted the label on the y-axis of that figure. It is not in percent, it is a multiplier. So 0.5 means 50%.

              Quoting the paper, emphasis mine:

              The chart in Figure 9 shows the expected increase in AFR due to higher power-on-hours. Moving a drive from an expected 2,400 POH per year to 8,760 POH per year would increase the failure rate al

              • Perhaps you misinterpreted the label on the y-axis of that figure. It is not in percent, it is a multiplier. So 0.5 means 50%.

                Oops, indeed I did. It only scales my interpretation, rather than contradicting it, though. It still indicates that wear is highly significant (which I expected, but previously erroneously assumed was accounted for by MTBFs applying to power-on time rather than calendar time).

    • I don't think drive reliability is that bad. I'm using more drives now (five in each of two computers, plus external drives) than I ever have (it used to be just one or two drives per computer), and I am getting fewer failures than I did a decade ago. I had one drive fail a month ago, and two fail about a decade ago. I've got many drives that work but aren't worth connecting; my first drive probably would still work, but 40MB isn't worth it except for the nostalgia - to see what I ran ~15 years ag
    • The quality of consumer hard drives nowadays is way, WAY low, and the manufacturers should do something about it.

      Point, counterpoint...

      I've never had a single one of my own hard drives fail. Not a single one, ever. I've had a dozen or so that I can remember, from the 20MiB drive in my Amiga to the 250GiB that now hangs off my NSLU2. They are all either still functioning or became obsolete before failing. Many of them have been run 24/7 for significant chunks of their lives and I don't replace them unl

  • I am shocked! (Score:2, Insightful)

    by Anonymous Coward
    I just can't believe that the same vendors that would misrepresent the capacity of their disks by redefining a gigabyte as 1,000,000,000 bytes instead of 1,073,741,824 bytes would misrepresent their MTBF too! And by the way, nobody actually runs a statistically significant sample of their equipment for 10,000 hours to arrive at an MTBF of 10,000 hours, so isn't their methodology a little suspect in the first place?
    • Off-Topic: SI Units (Score:5, Informative)

      by ewhac ( 5844 ) on Friday March 02, 2007 @05:21PM (#18212634) Homepage Journal

      I just can't believe that the same vendors that would misrepresent the capacity of their disk by redefining a Gigabyte as 1,000,000,000 bytes instead of 1,073,741,824 bytes would misrepresent their MTBF too!

      Not that this is actually relevant or anything, but there's been a long-standing schism between the computing community and the scientific community concerning the meaning of the SI prefixes Kilo, Mega, and Giga. Until computers showed up, Kilo, Mega, and Giga referred exclusively to multipliers of exactly 1,000, 1,000,000, and 1,000,000,000, respectively. Then, when computers showed up and people had to start speaking of large storage sizes, the computing guys overloaded the prefixes to mean powers of two which were "close enough." Thus, when one speaks of computer storage, Kilo, Mega, and Giga refer to 2**10, 2**20, and 2**30 bytes, respectively. Kilo, Mega, and Giga, when used in this way, are properly slang, but they've gained traction in the mainstream, causing confusion among members of differing disciplines.

      As such, there has been a decree [nist.gov] to give the powers of two their own prefix names. The following have been established:

      • 2**10: Kibi (abbreviated Ki)
      • 2**20: Mebi (Mi)
      • 2**30: Gibi (Gi)

      These new prefixes are gaining traction in some circles. If you have a recent release of Linux handy, type /sbin/ifconfig and look at the RX and TX byte counts. It uses the new prefixes.
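
      For a concrete sense of the gap between the two conventions (plain arithmetic; the drive sizes below are just examples):

      # How much a vendor "decimal" capacity shrinks when reported in binary units.
      for advertised_gb in (80, 250, 500, 750):
          total_bytes = advertised_gb * 10**9      # vendor gigabytes (GB)
          gibibytes = total_bytes / 2**30          # what most OSes report (GiB)
          print(f"{advertised_gb} GB = {gibibytes:.1f} GiB "
                f"({gibibytes / advertised_gb:.1%} of the label)")
      # The gap is ~6.9% at the gigabyte level and grows to ~9.1% at terabytes.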

      Schwab

  • Here are the main conclusions:
    • the observed time to disk replacement is always much lower than the datasheet MTTF
    • SATA is not necessarily less reliable than FC and SCSI disks
    • contrary to popular belief, hard drive replacement rates do not enter steady state after the first year of operation, and in fact steadily increase over time.
    • early onset of wear-out has a stronger impact on replacement than infant mortality.
    • they show that the common assumptions that the time between failure follows an exponential distributio
  • Is there anyone out there that actually believed the published MTBF figures, even BEFORE these articles came out?

    It's hard to take someone seriously when they claim that their drives have a 100+ year MTBF, especially since precious few are still functional after 1/10th of that much use. To make it even better, many drives are NOT rated for continuous use, but only a certain number of hours per day. I didn't know that anyone EVER believed the MTBF B.S.

    • It's hard to take someone seriously when they claim that their drives have a 100+ year MTBF, especially since precious few are still functional after 1/10th of that much use.

      You're misinterpreting MTBF. A 100-year MTBF does not mean the drive will last 100 years; it means that roughly 1 in 100 drives will fail each year. There will be another spec somewhere which specifies the design lifetime. For the Fujitsu MHT2060AT [fujitsu.com] drive which was in my laptop, the MTBF is 300 000 hours, but the component life is a crappy 20 000

  • Check SMART Info (Score:4, Interesting)

    by Bill Dimm ( 463823 ) on Friday March 02, 2007 @04:41PM (#18212094) Homepage
    Slightly off-topic, but if you haven't checked the Self-Monitoring, Analysis and Reporting Technology (SMART) info provided by your drive to see if it is having errors, you probably should. You can download smartmontools [sourceforge.net], which works on Linux/Unix and Windows. Your Linux distro may have it included, but may not have the daemon running to automatically monitor the drive (smartd).

    To view the SMART info for drive /dev/sda do:
    smartctl -a /dev/sda
    To do a full disk read check (can take hours) do:
    smartctl -t long /dev/sda

    Sadly, I just found read errors on a 375-hour-old drive (manufacturer's software claimed that repair succeeded). Fortunately, they were on the Windows partition :-)
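
    If you want to automate that check across a handful of drives, a minimal wrapper might look like the sketch below (it assumes smartmontools is installed, that you run it with sufficient privileges, and that the device list matches your machine):

    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]   # adjust for your system

    for dev in DEVICES:
        # "smartctl -H" prints the drive's overall health self-assessment.
        result = subprocess.run(["smartctl", "-H", dev],
                                capture_output=True, text=True)
        output = result.stdout
        healthy = "PASSED" in output or "Health Status: OK" in output
        print(f"{dev}: {'looks OK' if healthy else 'CHECK THIS DRIVE'}")
        if not healthy:
            print(output)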
    • Slightly off-topic, but if you haven't checked the Self-Monitoring, Analysis and Reporting Technology (SMART) info provided by your drive to see if it is having errors, you probably should.

      The last survey that popped up here said that if SMART says your drive will fail, it probably will, but if SMART doesn't say it will fail, it doesn't mean much.

      Suffice to say that you should never trust any piece of hardware that thinks it's SMARTer than you are.

      • The last survey that popped up here said that if SMART says your drive will fail, it probably will, but if SMART doesn't say it will fail, it doesn't mean much.

        Yes, that was the Google study [slashdot.org]. So, if SMART says there is a problem, you should pay attention to it. If SMART doesn't find a problem, that doesn't mean you are out of the woods.
    • by sparkz ( 146432 )
      Good point; I just downloaded it. It just stores the 5 most recent errors:

      hda has had 356 errors in its short life (I've had it about a year; 200GB Seagate IDE)
      hdc has had 4,560 errors in its life (after nearly 3 years of service; 80GB Maxtor IDE)

      That doesn't sound good to me.

      I got the Seagate because my previous drive had failed fsck a few times and had some dodgy-looking data on it.

      These figures suggest about 1 error/day for the Seagate, and 4 errors/day for the Maxtor.

      I don't li
    • Re: (Score:3, Informative)

      by Chalex ( 71702 )
      Slightly off-topic, but if you haven't checked the Google paper on Self-Monitoring, Analysis and Reporting Technology (SMART) info provided by your drive to see if it is having errors, you probably should. The paper is available here: http://hardware.slashdot.org/hardware/07/02/18/0420247.shtml [slashdot.org]

      The conclusions are roughly the following: a) if there are SMART errors, the disk will fail soon, b) if there are no SMART errors, the disk is still likely to fail. They saw no SMART errors on 36% of their failed d
  • New meaning for RAID: Redundant Articles of Identical Discourse.
    Slashdot has a high rate of RAID, which is a bad thing. It has been a whole 9 days. Slashdot needs a story moderation system so dupe articles can get modded out of existence. Ditto for Slashdot editors who do the duping! :) (I have long since disabled tagging since 99% of the tags were completely worthless: "yes", "no", "maybe", "fud", etc. If tagging is actually useful now, please let me know!)

    Can we get redundant posting on the story about google's paper [slashdot.org]?
    • They aren't useful yet. Given the crowd, won't be until they're rethought.
    • Slashdot does have a story moderation system now. It is called Firehose - you can find a link in the menu at the top of the screen. It allows you to give a thumbs up or thumbs down to a story, as well as marking a story with feedback such as dupe or typo, in addition to the normal tagging system.

      I gave this story both a thumbs down and dupe feedback; however, so many other people moderated the story up that it was at the highest (visible) ranking by the time it got posted. Apparently a bunch of people missed the
  • Unfortunately the data was skewed by one large web site that reported its results multiple times.
  • One of the things that bugged me last time this report was on /. was that two of the three sources reported that memory was replaced after 20% or more of their system failures. That seems pretty odd, because in my experience memory hardly ever just goes bad. Sure, sometimes it's bad right out of the box, which is why I test every module that I buy, but once it's installed and tested, memory tends to keep working just about forever. If that number is off, then I wonder how seriously I should take their other number
    • Re: (Score:3, Interesting)

      by Akaihiryuu ( 786040 )
      I had a 4MB 72-pin parity SIMM go bad one time...this was about 12 years ago in a 486 I used to have. It just didn't work one day (it worked for the first two months). Turn the computer on, get past BIOS start, bam...parity error before the bootloader could even start. Reboot, try again, parity error. Turn off parity checking, and it actually started to boot and then crashed. The RAM was obviously very defective...when I took that one stick out the computer booted normally even with parity on; if I tried to boot
  • Samsung seems to have pretty decent QC at this time. I have no issues with them. OTOH, I have seen Maxtors die with less than 2 years on them.
  • No way (Score:2, Funny)

    by Tablizer ( 95088 )
    High rate of failure? That's a bunch of
  • Seagate (Score:4, Insightful)

    by mabu ( 178417 ) on Friday March 02, 2007 @05:04PM (#18212390)
    After 12 years of running Internet servers, I won't put anything but Seagate SCSI drives in any mission critical servers. My experience indicates Seagate drives are superior. Who's the worst? Quantum. The only thing Quantum drives are good for is starting a fire IMO.
  • by dangitman ( 862676 ) on Friday March 02, 2007 @05:09PM (#18212478)
    Pick any two.

    I've noticed this personally. Now, anecdotal evidence doesn't count for a lot, and it may be that we are pushing our drives harder. But back in the day of 40MB hard drives that cost a fortune, they used to last forever. The only drives I ever had fail on me in the old days were the SyQuest removable HD cartridges, for obvious reasons. But even they didn't fail that often, considering the extra wear-and-tear of having a removable platter with separate heads in the drive.

    But these days, with our high-capacity ATA drives, I see hard drives failing every month. Sure, the drives are cheap and huge, but they don't seem to make them like they used to. I guess it's just a consequence of pushing the storage and speed to such high levels, and cheap mass-production. Although the drives are cheap, if somebody doesn't back up their data, the costs are incalculable if the data is valuable.

  • When I was in high school in 1995, I was a network intern. We had a 486 Novell Netware server for the high school building. The actual admin was a LOTR fan, and named it GANDALF, others were SAMWISE, etc. One day about four years ago, a friend of mine who worked for the school district calls me and says, "hey, I saw Gandalf in the dumpster today. I thought you might want him, so I grabbed him."

    Besides nostalgia, there wasn't a lot I could do with a giant, noisy 486 anymore, so I ended up just pulling the SC
  • by Tim Browse ( 9263 ) on Friday March 02, 2007 @05:38PM (#18212852)

    ...is that it detects SMART disk errors in normal use (i.e. you don't have to be watching the BIOS screens when your PC boots).

    When I was trying the Vista RC, it told me that my drive was close to failing. I, of course, didn't believe it at first, but I ran the Seagate test floppy and it agreed. So I sent it back to Seagate for a free replacement.

    About the only feature that impressed me in Vista, sadly. (And I'm not sure it should have impressed me, tbh. I'm assuming XP never did this as I've never seen/heard of such a feature.)

  • by CorporalKlinger ( 871715 ) on Friday March 02, 2007 @09:08PM (#18214468)
    I think one of the key problems here isn't necessarily the statistical methods used, it is that the CMU team was comparing real-life drive performance to the "ideal" performance levels predicted by the drive manufacturers. Allow me to provide two examples of this "apples to oranges" comparison problem.

    I have had two computers with power supply units that were "acting up." They ended up killing my hard drives on multiple occasions - Seagates, WD's, Maxtors, etc. It didn't matter what type of drive you put in these systems; the drive would die after anywhere from a week to two years. I later discovered that the power supplies were the problem, replaced them with brand new ones, and replaced the drives one last time. That was quite some time ago (years), and those drives, although small, still work, and have been transferred into newer computer systems since that time. The PSUs were killing the drives; the drives weren't inherently bad, nor did they have a manufacturing defect. A friend of mine who lives in an apartment building constructed circa 1930 experienced similar problems with his drives. After just a few months, it seemed like his drives would spontaneously fail. When I tested his grounding plug, I found that it was carrying a voltage of about 30V (a hot ground - how wonderful). Since he moved out of that building and replaced his computer's PSU, no drive failures.

    The same type of thing is true in automobile mileage testing. Car manufacturers must subject their cars to tests based on rules and procedures dictated by state and federal government agencies. These tests are almost never real world - driving on hilly terrain, through winds, with the headlights and window wipers on, plus the AC for defrost. They're based on a certain protocol developed in a laboratory to level the playing field and ensure that the ratings, for the most part, are similar. It simply means when you buy a new car, you can expect that under ideal conditions and at the beginning of the vehicle's life, it should BE ABLE to get the gas mileage listed on the window (based on an average sampling of the performance of many vehicles).

    My point is that there really isn't a decent way to go about ensuring that an estimated statistic is valid for individual situations. By modifying the environmental conditions, the "rules of the game" change. A data-center with exceptional environmental control and voltage regulation systems, and top-quality server components (PSU's, voltage regulators, etc.) should expect to experience fewer drive failures per year than the drives found in an old chicken-shack data center set up in some hillbilly's back yard out in the middle of nowhere where quality is the last thing on the IT team's mind. It's impractical to expect that EVERY data center will be ideal - and since it's very very difficult to have better than the "ideal" testing conditions used in the MTTF tests - the real-life performance can only move towards more frequent and early failures. Using the car example above, since almost nobody is going to be using their vehicle in conditions BETTER than the ideal dictated by the protocols set forth by the government, and almost EVERYONE will be using their vehicles under worse conditions, the population average and median have nowhere to go but down. That doesn't mean the number is wrong, it just means that it's what the vehicle is capable of - but almost never demonstrates in terms of its performance - since ideal conditions in the real world are SO rare.
