Data Storage Hardware

Everything You Know About Disks Is Wrong 330

modapi writes "Google's wasn't the best storage paper at FAST '07. Another, more provocative paper looking at real-world results from 100,000 disk drives got the 'Best Paper' award. Bianca Schroeder, of CMU's Parallel Data Lab, submitted Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? The paper crushes a number of (what we now know to be) myths about disks such as vendor MTBF validity, 'consumer' vs. 'enterprise' drive reliability (spoiler: no difference), and RAID 5 assumptions. StorageMojo has a good summary of the paper's key points."
  • MTBF (Score:5, Interesting)

    by seanadams.com ( 463190 ) * on Tuesday February 20, 2007 @08:36PM (#18090970) Homepage
    MT[TB]F has become a completely BS metric because it is so poorly understood. It only works if your failure rate is linear with respect to time. Even if you test for a stupendously huge period of time, it is still misleading because of the bathtub curve effect. You might get an MTBF of say, two years, when the reality is that the distribution has a big spike at one month, and the rest of the failures forming a wide bell curve centered at say, five years.

    Suppose a tire manufacturer drove their tires around the block, and then observed that not one of the four tires had gone bald. Could they then claim an enormous MTBF? Of course not, but that is no less absurd than the testing being reported by hard drive manufacturers.
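
    A rough Monte Carlo sketch of that point, in Python with illustrative numbers only (the 10% early-failure share and the 5-year wear-out peak are assumptions, not measurements): the mean of a mixed distribution says little about either failure mode.

        # Illustrative only: a mixture of early failures and late wear-out whose
        # mean hides both modes.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Assume ~10% of units fail around 1 month; the rest wear out around 5 years.
        early = rng.normal(loc=1, scale=0.25, size=int(0.1 * n))       # months
        wearout = rng.normal(loc=60, scale=12, size=n - int(0.1 * n))  # months
        lifetimes = np.clip(np.concatenate([early, wearout]), 0.01, None)

        print(f"mean lifetime: {lifetimes.mean():.1f} months")            # ~54 months
        print(f"failed in first 3 months: {(lifetimes < 3).mean():.1%}")  # ~10%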
    • Re:MTBF (Score:5, Informative)

      by Wilson_6500 ( 896824 ) on Tuesday February 20, 2007 @08:45PM (#18091058)
      Um, but doesn't the summary of the paper say that there is no infant mortality effect, and that failure rates increase with time, and thus the bathtub curve doesn't actually apply?
      • Re:MTBF (Score:5, Insightful)

        by vtcodger ( 957785 ) on Wednesday February 21, 2007 @03:01AM (#18093480)
        ***Um, but doesn't the summary of the paper say that there is no infant mortality effect,***

        It does. But it also says -- repeatedly -- that the data is disk replacement data, NOT disk failure data. I.e., it's data on the number of problems that the user tech thought might be fixed by replacing the disk, not the number of disks that actually failed. One might wonder if, for example, the response to a system failing while it was being set up or in early lifetime might not be to put the whole damn thing into a box and ship it back to the vendor rather than dink around trying to figure out what is wrong. That won't be recorded as a disk failure.

        The study is fine -- really it is. But, table 3 ought to give pause. It's quite clear that different data sets show quite different diagnostic patterns. We've got one set of data that says that power supplies, for example, are hardly ever replaced and a second set that says that they are the most frequently replaced item. There MAY be good reasons for this. But it could also be an indication that the technicians are incompetent, that the record keeping is erratic, or (and I'd seriously consider this one) that only certain kinds of failures are being recorded.

        Finally, I think someone really ought to mention that there is no way that a disk manufacturer is actually going to measure MTBFs of 100,000 hours prior to printing up the data sheets. The problem is that there are only around 750 hours in a month. And you need a reasonable number of failures (many quality guys would say at least 4) in order to get a reasonably valid MTBF. In order to actually measure a six-digit MTBF, the manufacturer would have to run maybe 500 units for a month. My guess is that isn't going to happen. If they have the production line producing 500 units, they are going to ship them. Manufacturer MTBF data are surely based on data from a handful of engineering and preproduction units plus a bunch of wild guesses.

        My guess, and it is just a guess, is that manufacturer MTBFs for disks are probably pretty much the MTBF goal in the drive specifications established before the design actually started.

        Incidentally, based on some experience with other sorts of high-tech gadgetry, if the engineering/preproduction units do fail during test, a failure analysis will be done, and steps will be taken to fix the problem. Problem's fixed. OK, we shouldn't count those failures since they won't happen any more. That's called "censoring failure data". Begin to get an idea why disk MTBFs might be pretty much pure fiction?
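
        To put numbers on that last point, here is a back-of-envelope sketch assuming a constant failure rate and the 750 hours/month figure above:

            # Device-hours needed to *observe* an MTBF with at least 4 failures,
            # assuming a constant failure rate. Back-of-envelope only.
            HOURS_PER_MONTH = 750

            def units_needed(mtbf_hours, failures=4, test_months=1):
                device_hours = failures * mtbf_hours
                return device_hours / (test_months * HOURS_PER_MONTH)

            print(units_needed(100_000))    # ~533 drives for a month (the six-digit case)
            print(units_needed(1_000_000))  # ~5,333 drives for a month at the advertised figure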

    • Re:MTBF? RTFA. (Score:5, Informative)

      by Vellmont ( 569020 ) on Tuesday February 20, 2007 @08:58PM (#18091198) Homepage
      You might get an MTBF of say, two years, when the reality is that the distribution has a big spike at one month, and the rest of the failures forming a wide bell curve centered at say, five years.


      Well, the article actually says that drives don't have a spike of failures at the beginning. It also says failure rates increase with time. So you're right that MTBF shouldn't be taken for a single drive, since the failure rate at 5 years is going to be much higher than at one.

      The other thing that the article claims is that the stated MTBF is simply wrong. It mentioned a stated MTBF of 1,000,000 hours, and an observed MTBF of 300,000 hours. That's pretty bad. It's also quite interesting that the "enterprise" level drives aren't any better than the consumer level drives.
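
      For scale, converting those MTBF figures to an annualized failure rate under the simplest constant-rate model (a sketch, not the paper's method):

          import math

          HOURS_PER_YEAR = 8760

          def afr(mtbf_hours):
              # Annualized failure rate under an exponential (constant-rate) model.
              return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

          print(f"claimed  1,000,000 h MTBF -> {afr(1_000_000):.2%} per year")  # ~0.9%
          print(f"observed   300,000 h MTBF -> {afr(300_000):.2%} per year")    # ~2.9%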
      • by bill_mcgonigle ( 4333 ) * on Tuesday February 20, 2007 @11:33PM (#18092480) Homepage Journal
        Well, the article actually says that drives don't have a spike of failures at the beginning.

        Hmm, the Google paper says they do, from 3-6 months (Figure 2).

        Which leaves us with confirmation that 50% of all studies are wrong.
        • by Moraelin ( 679338 ) on Wednesday February 21, 2007 @03:19AM (#18093550) Journal
          The two don't really contradict each other that much. Google's spike is relatively small and it's really a spike in the first 1-3 months. By the 6th month it's basically settled. In this paper half the time they graph in whole year increments, so that kind of a spike would be averaged into the first year. So, no, they don't contradict each other as such. And in at least one of the graphs by month in this paper (HPC1), there is something that looks like a spike in the first month.

          More importantly, they don't contradict each other in respect to the rest of the curve. With or without that spike, the curve just doesn't look like the bathtub fairy tale that drive makers try to bullshit us with. You're led into a false sense of security that, basically, if a drive didn't fail within the first couple of months, then it'll be at a (nearly) constant and very small probability to fail for the whole next 5 years, and only then it starts rising again. Basically that if you upgrade your drives every 4 years, whatever didn't fail within 2-3 months, heck, it's very unlikely to fail. And the curve just doesn't look that way. The probability to fail rises continuously, and (again whether that spike actually exists or not) after as little as 1 year you're above the starting height of the "bathtub" already.

          In retrospect, I don't even know when and why the "bathtub" myth even started. The bathtub distribution was originally for stuff like electronic components, without moving parts. For something with mechanical wear and tear like a hard drive, who the heck came up with the idea that the same curve must apply? Shouldn't it have been common sense all along that it linearly gets more wear and tear?

          Both papers also tell us that the manufacturers' MTBF numbers are, basically, pure bullshit. They're some impressive number put there for the benefit of the marketing department, not because someone at Seagate/Maxtor/whatever actually believes that number.

          In retrospect, again, we should have had an alarm signal when the manufacturers lowered their warranty from 3 years to 1. If there really were (1) the MTBF they claim, and more importantly (2) the bathtub curve they claim, the reduction wouldn't have made much of a difference. I mean, most failures would have happened within a couple of months, followed by barely a trickle of defective drives for the next 5 years straight. Why bother doing the bad-for-marketing thing of lowering the warranty in that scenario? Or did they already know they were lying?

          And finally, a very important point is that (again, bullshit marketing claims be damned) there is no difference in reliability between cheap SATA and expensive SCSI and FC. There is this assumption permeating the whole of society that if something is expensive, it _must_ automatically be better and more durable than the cheap stuff. That if you buy a big plasma TV, it's automatically better and lasts longer than an el-cheapo CRT. (Yeah, right. Plasma is actually known for its decay over time.) A whole edifice of consumerism, conspicuous consumption, and SFV (Stupid Fashion Victim) syndrome is based on that bullshit excuse to spend more than you need to spend. "Yeah, but it'll be better and last longer!" Yeah, right.

          I've actually met people who wouldn't even _consider_ putting an ATA drive in any kind of server. "What, you're going to put your enterprise data on ATA drives???" (Said with a perplexed look, as if I had proposed flushing it to /dev/null or something.) Well, now we know they're not actually any worse. If you don't actually need the extra bandwidth or lower latency of a 15,000 RPM drive, then you can just as well drop a SATA drive in that machine. Even for 10,000 RPM, 4.5 ms, there are the WD Raptor drives with a SATA interface, and they're cheaper than a SCSI or FC drive. For a lot of stuff you don't even need those; a 7,200 RPM drive will do perfectly fine.
    • Re:MTBF (Score:4, Interesting)

      by gvc ( 167165 ) on Tuesday February 20, 2007 @09:04PM (#18091252)

      MT[TB]F has become a completely BS metric because it is so poorly understood. It only works if your failure rate is linear with respect to time. Even if you test for a stupendously huge period of time, it is still misleading because of the bathtub curve effect. You might get an MTBF of say, two years, when the reality is that the distribution has a big spike at one month, and the rest of the failures forming a wide bell curve centered at say, five years.
      The simplest model for survival analysis is that the failure rate is constant. That yields an exponential distribution, which I would not characterize as a bell curve. The Weibull distribution more aptly models things (like people and disks) that eventually wear out; i.e. the failure rate increases with time (but not linearly).

      With the right model, it is possible to extrapolate life expectancy from a short trial. It is just that the manufacturers have no incentive to tell the truth, so they don't. Vendors never tell the truth unless some standardized measurement is imposed on them.
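
      For example, a Weibull model with shape > 1 (rising hazard) gives an MTTF directly from its parameters. A small sketch with hypothetical parameters, not values fitted to the paper's data:

          from math import gamma

          def weibull_mttf(shape_k, scale_hours):
              # Mean of a Weibull(k, lambda) lifetime: lambda * Gamma(1 + 1/k).
              return scale_hours * gamma(1 + 1 / shape_k)

          print(weibull_mttf(1.0, 300_000))  # shape 1 = exponential, MTTF equals the scale
          print(weibull_mttf(1.5, 300_000))  # rising hazard, MTTF ~ 271,000 hours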

    • Re: (Score:3, Informative)

      by kidgenius ( 704962 )
      Well, I guess you don't really understand reliability then. You also don't understand MTBF/MTTF (hint: they aren't the same). What they have said is a big "no duh" to anyone in the field. MTTF will work regardless of whether or not your failure rate is linear with time. Also, there are other distributions of failure beyond just the exponential, such as the Weibull. The exponential is a special case of the Weibull. Using this distribution you can accurately calculate an MTTF. Now, the MTBF will not match the MTTF init
      • Re: (Score:3, Insightful)

        by kidgenius ( 704962 )
        I'm also going to add to my statement and mention that the authors of the article do not understand MTTF. They have calculated MTBF, not MTTF. They are not the same. In fact, they have assumed that the drives fail in a random way by doing a simple hours/failures calculation. They really need to look at failures and suspensions and perform a Weibull analysis to see how close their numbers are to the manufacturers' stated values.
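
        A sketch of what such an analysis looks like, with toy numbers (the ages and counts below are invented; the log-likelihood treats suspensions as right-censored observations):

            import numpy as np
            from scipy.optimize import minimize

            failure_ages = np.array([0.8, 2.1, 3.5, 4.0, 4.7])   # years at failure (toy data)
            suspension_ages = np.array([5.0] * 20)                # still running when observation ended

            def neg_log_lik(params):
                k, lam = np.exp(params)  # exponentiate so both stay positive
                # Weibull log-pdf for failures, log-survival for suspensions.
                ll_fail = np.sum(np.log(k / lam) + (k - 1) * np.log(failure_ages / lam)
                                 - (failure_ages / lam) ** k)
                ll_susp = -np.sum((suspension_ages / lam) ** k)
                return -(ll_fail + ll_susp)

            res = minimize(neg_log_lik, x0=[0.0, 1.0])
            k_hat, lam_hat = np.exp(res.x)
            print(f"fitted shape ~{k_hat:.2f}, scale ~{lam_hat:.1f} years")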
    • Re:MTBF (Score:4, Insightful)

      by 6th time lucky ( 811282 ) on Tuesday February 20, 2007 @10:56PM (#18092216)
      MT[TB]F has become a completely BS metric because it is so poorly understood.
      Don't forget the M in MTBF. It's the mean [wikipedia.org] (statistically speaking...). That means (!) that some might fail now, some later, but on average they last a while. Manipulate that information and you might get a 1,000,000 hr MTBF, but you have to account for and not forget about the worst-case scenario (that's what a failure is), which might be that the next drive is going to fail *now*, which is why RAID5 isn't as good as it might seem looking at the average statistics.

      Backup, backup, backup has always been my motto (and that's just for personal data). Interesting that Google thinks this is the way to go also (i.e. 3 copies of all data).
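
      The spread around that mean is the whole point: even under the friendliest constant-rate reading of a 1,000,000-hour MTBF (a sketch, not a vendor formula), failures pile up steadily:

          import math

          def fraction_failed_by(hours, mtbf_hours):
              # Cumulative failure probability under a constant-rate (exponential) model.
              return 1 - math.exp(-hours / mtbf_hours)

          MTBF = 1_000_000
          print(f"by 1 year:   {fraction_failed_by(8_760, MTBF):.1%}")   # ~0.9% of drives
          print(f"by 5 years:  {fraction_failed_by(43_800, MTBF):.1%}")  # ~4.3% of drives
          print(f"by the MTTF: {fraction_failed_by(MTBF, MTBF):.1%}")    # ~63%, not 50%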
  • by DogDude ( 805747 ) on Tuesday February 20, 2007 @08:41PM (#18091024)
    Every single mechanism with moving parts will fail. It's just a matter of when. In a few years, when everybody is using solid state drives, people will look back and shake their heads, wondering why we were using spinning magnetic platters to hold all of our critical data for such a long time.
    • Re: (Score:2, Interesting)

      by Nimloth ( 704789 )
      I thought flash memory had a lower read/write cycle expectancy before crapping out?
      • Re:moving parts (Score:5, Informative)

        by NMerriam ( 15122 ) <NMerriam@artboy.org> on Tuesday February 20, 2007 @09:19PM (#18091382) Homepage

        I thought flash memory had a lower read/write cycle expectancy before crapping out?


        They do have a limited read/write lifetime for each sector, BUT the controllers automatically distribute data over the least-used sectors (since there's no performance penalty to non-linear storage), and you wind up getting the maximum possible lifetime from well-built solid-state drives (assuming no other failures).

        So in practice, the lifetime of modern solid state will be better than spinning disks as long as you aren't reading and writing every sector of the disk on a daily basis.
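
        A minimal sketch of the wear-leveling idea (the allocation policy only; real controllers also track live data, spare blocks, and garbage collection):

            class WearLeveler:
                """Toy 'least-worn block first' allocator."""

                def __init__(self, n_blocks):
                    self.erase_counts = [0] * n_blocks

                def pick_block(self):
                    # Write to the block with the fewest program/erase cycles so far.
                    block = min(range(len(self.erase_counts)),
                                key=self.erase_counts.__getitem__)
                    self.erase_counts[block] += 1
                    return block

            wl = WearLeveler(n_blocks=4)
            for _ in range(10):
                wl.pick_block()
            print(wl.erase_counts)  # [3, 3, 2, 2] -- wear stays within one cycle across blocks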
        • Re: (Score:2, Informative)

          by scoot80 ( 1017822 )
          Flash memory will have about 100,000 write cycles before you will burn it out. As the parent mentioned, a controller would write that data to several different locations, at different times, thus increasing the lifetime. What this would mean, though, is that your flash disk will be considerably bigger than what it can actually hold.
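
          Back-of-envelope endurance under those assumptions (perfect wear leveling and the 100,000-cycle figure above; the 32 GB capacity and 20 GB/day write load are made-up inputs):

              def years_of_writes(capacity_gb, cycles_per_block, gb_written_per_day):
                  # Total write volume the device can absorb, spread evenly by wear leveling.
                  total_writes_gb = capacity_gb * cycles_per_block
                  return total_writes_gb / gb_written_per_day / 365

              print(f"{years_of_writes(32, 100_000, 20):.0f} years")  # ~438 years for a 32 GB part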
          • Re:moving parts (Score:4, Interesting)

            by blackest_k ( 761565 ) on Tuesday February 20, 2007 @10:37PM (#18092090) Homepage Journal
            Still doesn't mean it will last; I've got a 1 GB USB flash drive here that died in less than 8 weeks with very few reads and writes. It will not identify itself. It might have 99,900 write cycles left, but it's still trashed.
            Let's face it: there is no reliable storage medium. The only way to be safe is multiple copies.

               
        • by tedgyz ( 515156 ) *
          So is there a MTBF for solid state drives? I'm serious.
    • by theReal-Hp_Sauce ( 1030010 ) on Tuesday February 20, 2007 @09:05PM (#18091254)
      Forget Solid State Drives, soon we'll have Isolinear Chips. It won't matter if they fail or not, because as long as the story line supports it Geordi can re-route the power through some other subsystem, Data can move the chips around really quickly, Picard can "make it so", and after it's all over Wesley can wear a horrible sweater and deliver a really cheesy line.

      -C
    • It is just a matter of time. Depending on the technology (e.g. flash) it might be a short to medium time or a long time.

      If something has an MTBF of 1 million hours (that's 114 years or so), then you'll be a long time dead before it fails.

      At this stage, the only reasonable non-volatile solid state alternative is NAND flash which costs approx 2 cents per MByte ($20/Gbyte) and dropping. NAND flash has far slower transfer speeds than HDD, but is far smaller, uses less power and is mechanically robust. NAND flash

      • Re: (Score:3, Informative)

        by Detritus ( 11846 )
        MTBF tells you the failure rate over the item's service lifetime, which, for hard disks, is commonly five years.
    • by brarrr ( 99867 )
      Says you!

      I'm going to live forever!
    • Unfortunately, we don't have solid state storage that doesn't fail either. I've had more RAM chips die than hard drives. And I know that you aren't suggesting that flash memory doesn't fail. Although I've never had flash memory fail, I've only ever used it for digicams and mp3 players, and not for the kind of usage pattern you would get from a hard drive.
    • Re:moving parts (Score:5, Informative)

      by wik ( 10258 ) on Tuesday February 20, 2007 @09:45PM (#18091610) Homepage Journal
      Not true. Transistors at really small dimensions (e.g., 32nm and 22nm processes) will experience soft breakdown during (what used to be) normal operational lifetimes. This will be a big problem in microprocessors because of gate oxide breakdown, NBTI, electromigration, and other processes. Even "solid-state" parts have to tolerate current, electric fields, and high thermal conditions and gradually break down, just like mechanical parts. Don't go believing that your storage will be much safer, either.
    • Re: (Score:3, Informative)

      If you look at the numbers for the failure of the system RAM and assume that most machines have much, much more disk space than RAM, SSDs don't make sense. They are faster, but you won't get better MTBFs. On the HPC1 and COM1 groups of machines, the memory was replaced almost as often as the hard drives. If you had to replace all that HD space with RAM, your failure rate would go through the roof.
  • i'll tell you (Score:3, Interesting)

    by User 956 ( 568564 ) on Tuesday February 20, 2007 @08:43PM (#18091042) Homepage
    Bianca Schroeder, of CMU's Parallel Data Lab, submitted Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?

    It means I should be storing my important, important data on a service like S3. [amazon.com]
  • by cookieinc ( 975574 ) on Tuesday February 20, 2007 @08:49PM (#18091092)

    Everything You Know About Disks Is Wrong
    Finally, a paper which dispels the common myth that disks are made of boiled candy.
  • Amazing! (Score:3, Insightful)

    by Dr. Eggman ( 932300 ) on Tuesday February 20, 2007 @08:52PM (#18091118)
    You mean to tell me these people have found hard drives that don't fail beyond repair by the end of the first year? I've never encountered an HD that has done this, much to the despair of my wallet. Now, I am serious: what is wrong with the hard drives I choose that kills them so quickly? Is Western Digital no longer a good manufacturer? Should I maybe not run a virus check nightly and a disk defrag weekly? Is 6.5GB of virtual memory too much to ask? Of course not, the manufacturers are just making crappier HDs. This article has told me one thing: it's time to get a RAID setup. I've been looking at RAID 5, but two things still trouble me, the price and the performance hit. Does anyone have any information on just how much a performance hit I might experience if I have to access the HD a lot?
    • If anything, RAID should make your hard disk access a lot faster. That is, unless you go for software RAID, which will put a hit on your processor. However, I think if you're going to make the investment to go with RAID 5, then buying a proper hardware controller won't add a significant amount to the cost of your set up.
      • Re: (Score:3, Informative)

        by petermgreen ( 876956 )
        If anything, RAID should make your hard disk access a lot faster. That is, unless you go for software RAID, which will put a hit on your processor.
        AFAICT Linux software RAID is actually pretty good nowadays, at least as long as you stick to the basic RAID levels.

        Beware of the very common fake hardware (e.g. really software, but with some BIOS and driver magic to make the array bootable and generally behave like hardware RAID from the user's point of view) controllers. These often have far worse performance tha
      • Re: (Score:3, Informative)

        by drsmithy ( 35869 )

        That is, unless you go for software RAID, which will put a hit on your processor.

        This myth needs to die. No remotely modern processor takes a meaningful performance hit from the processing overhead of RAID.

        However, I think if you're going to make the investment to go with RAID 5, then buying a proper hardware controller won't add a significant amount to the cost of your set up.

        Decent RAID5-capable controllers are hundreds of dollars. Software RAID is free and - in most cases - faster, more flexible a

      • Re: (Score:3, Interesting)

        by 10Ghz ( 453478 )
        "If anything, RAID should make your hard disk access a lot faster. That is, unless you go for software RAID, which will put a hit on your processor."

        Since we are talking about IO-bound operations, does that matter? I mean, the CPU is hardly ever the bottleneck these days; the hard drive quite often is. So even if soft-RAID puts more load on the CPU, does it cause any slowdown? Especially if it makes IO faster?
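
        One way to sanity-check that worry is to time the RAID 5 parity math itself (a crude numpy sketch; the kernel's optimized XOR code is faster still, so this is only a lower bound):

            import time
            import numpy as np

            CHUNK = 64 * 1024 * 1024  # 64 MB per data chunk, three chunks per parity stripe
            chunks = [np.frombuffer(np.random.bytes(CHUNK), dtype=np.uint64) for _ in range(3)]

            t0 = time.perf_counter()
            parity = chunks[0] ^ chunks[1] ^ chunks[2]
            dt = time.perf_counter() - t0
            print(f"parity over {3 * CHUNK / 1e6:.0f} MB of input in {dt * 1e3:.1f} ms "
                  f"({3 * CHUNK / dt / 1e9:.1f} GB/s)")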
    • by Rakishi ( 759894 )
      I'd be tempted to say that the problem may be partially on your end either due to having improper conditions (heat, etc.) or bad power/power supplies. Likewise if you get hard drives with a 1 or 3 year warranty then don't expect too much from them (I mean if they're dead in a year then you're not out much as the warranty should cover them... well unless you buy some dirt cheap refurbished 90-day warranty pos).

      Personally I backup all my data to a server running raid 1 (hard drives are relatively cheap and ra
    • I, on the other hand, have personally experienced one HD failure -- a Western Digital drive, as it happens -- in my LIFE.
    • ... was Western Digital EVER a good manufacturer?

      Seriously. The only dead drives I've ever seen are either IBM Deathstars (known by that name so completely that I don't know what the actual brand name is... 'disk star' perhaps?) and Western Digital drives. I generally buy Seagate or Hitachi drives, and I've never had a failure. Usually I run out of space and have to upgrade before the drives die. IBM drives other than the Deathstars seem to do ok as well.
      • by jafiwam ( 310805 )
        "Desk Store" and "Serve Store" I believe.

        I lost two RAID 5 setups to those because they failed faster than we could replace them. (1 spare and several days for shipping.) Out of the 7 we had in two servers, 5 of them failed, so the nickname is not undeserved in my opinion.
        • Re: (Score:2, Insightful)

          by BagOBones ( 574735 )
          Those Deathstars, as I like to call them, were really, really bad. If you build your servers with a strong support contract from your vendor you can get really fast drive replacement times. We run completely on Dell servers with GOLD level support. I had a drive fail in my primary file server, and I had a replacement drive on my reception desk 4 hours after putting my phone down from reporting the problem. The controller supported background rebuilding, so the users didn't even feel the loss.

          If you build your own se
      • IMHO, WD is STILL a good manufacturer. I've never had a problem with them. I sold a 640 MB drive to someone a few years ago; I believe it STILL works. I personally buy WD and Seagate now (I've always stayed away from Maxtor - I hope Seagate acquiring them improves the reliability of Maxtor drives, and doesn't affect Seagate drives at all). My MacBook Pro has a Hitachi drive (perpendicular, woot!). Anecdotal evidence doesn't say much, I've read good and bad things about pretty much any hard drive manufac
    • Now, I am serious, what is wrong with the harddrives I choose that kills them so quickly?

      First guess? Your system has a dirty power supply. (Unless you have a high-quality PSU and have a line-noise-filtering UPS, this is entirely possible.)

      This article has told me one thing: it's time to get a RAID setup. I've been looking at RAID 5, but two things still trouble me, the price and the performance hit. Does anyone have any information on just how much a performance hit I might experience if I have to access
    • Hmm - I had a HDD fail on me today, while making a backup image. It didn't like all the reading activity it seems. Sigh...
    • Re: (Score:3, Interesting)

      by Kadin2048 ( 468275 )
      Somewhere around I have an Apple 20MB hard drive that is getting on 15 years old. Sure, it hasn't seen a lot of usage recently, but I still fire it up every once in a while. (It makes the greatest turbine-like startup sound; seriously, it's like a 747.) Connects to the floppy disk controller. Has its own power supply.

      I'm sure there are people around with even older, still-working-fine gear. A while back, I saw some DEC disk packs for the early removable-platter hard drives selling on eBay, as pulls-from-wor
  • infant mortality (Score:5, Insightful)

    by Anonymous Coward on Tuesday February 20, 2007 @08:57PM (#18091192)
    I suspect that the 'infant mortality' syndrome really has to do with the drives being abused before they are installed in the machines (getting dropped during shipping, for example).

    The large shops these studies are looking at get their drives in bulk directly from the manufacturer; the rest of us, who have to go through several middlemen before we get our drives, have more of a chance that something happened to them before we received them.

    David Lang
    • Re: (Score:3, Insightful)

      I think the myth of infant mortality is that if the drive works in the first week/month it will work perfectly until the warranty/magic dust wears off, and you don't have to worry about reliability until then. What they saw in the real world was that some drives had consistently reduced performance and lifespan right from the start. You can't operate on the assumption that "I replaced 5 drives so I'm good for 3 years" and not keep spares or backups ready... the Google report takes this another step because t
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 20, 2007 @08:58PM (#18091208)
    Comment removed based on user account deletion
    • So... Server-grade HDs have a longer average life simply because more of them are installed in servers?
    • by pla ( 258480 )
      Or maybe powering the drives off and on is more stressful to the components

      You just posed the one question to which I'd actually have liked to know the answer... Turn it on and off as needed (minimize runtime), or leave it on all the time if you'll use it at least a few times per day (minimize power cycling).

      I know that counts as something of a religious issue among geeks, but I'd still have liked a good solid answer on it... It even has implications for whether or not we should let our non-laptops
      • Re: (Score:3, Informative)

        I never had a hard drive fail. I buy one more new one a year, and drop the smallest one. I run 4 at a time in a beige box PC. They are a mix of all sorts of manufacturers (usually from a CompUSA sale for less than $0.30/GB).

        - I never turn off the PC.
        - The case has no cover.
      • Re: (Score:3, Interesting)

        by Reziac ( 43301 ) *
        Well, I can connect my own anecdotes ;) Once they're fully set up, my everyday machines are never powered down again (except to upgrade the hardware), nor do the HDs spin down. They are also on good quality power supply units, AND are protected by a good UPS, AND have good cooling. Those 3 points can make all the difference in the world to their longevity, regardless of use patterns.

        Right now my everyday HDs number thus:

        6.4GB W.D. -- new in 1998, has always run 24/7. No SMART but probably has upward of 70,0
    • by Lumpy ( 12016 ) on Tuesday February 20, 2007 @09:38PM (#18091516) Homepage
      Or she forgot to put in the part that Enterprise drives are replaced on a schedule BEFORE they fail. At Comcast I used to have 30-some servers with 25-50 drives each scattered about the state. Every hard drive was replaced every 3 years to avoid failures. These servers (TV ad insertion servers) made us between $4,500 and $13,000 a minute while they were in operation, in spurts of 15 minutes down, 3-5 minutes inserting ads. Downtime was not acceptable, so we replaced them on a regular basis.

      Most enterprise-level operations that rely on their data replace drives before they fail. In fact, the replacement rate was increased to every 2 years, not for failure prevention but for capacity increases.
      • by MadMorf ( 118601 ) on Tuesday February 20, 2007 @11:01PM (#18092246) Homepage Journal
        Most enterprise-level operations that rely on their data replace drives before they fail.

        You worked at an unusual place!

        I'm a Tech Support Engineer for a large storage system manufacturer and I can tell you that NONE of our customers replace disks before they fail unless our OS detects a "predictive failure" for the disk. Our customers are some of the biggest names in business from all over the planet.
        • Re: (Score:3, Interesting)

          by yoprst ( 944706 )
          It's broadcasting, dude! No downtime is allowed. Here in Soviet Russia we (broadcasters) do exactly the same, except that we prefer 2-year period.
    • Do people actually shut their desktops off?

      The concept is bizarre to me. I haven't shut my desktop off on a daily basis in probably 15 years (or about as long as I've been running Linux as my desktop).

      This has nothing to do with the OS though. I don't power cycle any of my important electronics more than needed because I do believe it stresses them. My (PC) computers have always run 24/7 unless there is an electrical storm passing over or I don't have power.

      The last time I power cycled on a daily basis w
      • Re: (Score:3, Informative)

        by the_womble ( 580291 )
        There are some good reasons to shut down:

        1) Electricity consumption
        2) Power cuts (unless you have a UPS and software for a clean shutdown installed, what happens if there is a power cut while you are away?).
        3) Power fluctuations (my power supply blew dramatically after one a few months ago) and lightning.
        4) Heat (in a hot climate)
  • Cyrus IMAP (Score:3, Interesting)

    by More Trouble ( 211162 ) on Tuesday February 20, 2007 @09:05PM (#18091258)
    From StorageMojo's article: Further, these results validate the Google File System's central redundancy concept: forget RAID, just replicate the data three times. If I'm an IT architect, the idea that I can spend less money and get higher reliability from simple cluster storage file replication should be very attractive.

    For best-of-breed open source IMAP, that means Cyrus IMAP replication.
    :w
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Tuesday February 20, 2007 @09:10PM (#18091312) Journal
    What's interesting about both of these papers is that previously-believed myths are shown to be, in fact, myths.

    The Google paper shows that relatively high temperatures and high usage rates don't affect disk life.
    The current paper shows that interface (SCSI, FC vs ATA) had no effect either. The Google paper shows
    a significant infant mortality that the CMU paper didn't, and the Google paper shows some years of flat
    reliability where the current paper shows decreasing reliability from year one.

    They both show that the failure rate is far higher than the manufacturers specify, which shouldn't come
    as a surprise to anybody with a few hundred disks.

    I'm particularly pleased to see a stake driven through the heart of "SCSI disks are more reliable."
    Manufacturers have been pushing that principle for years, saying that "oh, we bin-out the SCSI disks
    after testing" or some other horseshit, but it's not true and it's never been true. The disks are
    sometimes faster, but they're not "better".

    Thad
  • Further, these results validate the Google File System's central redundancy concept: forget RAID, just replicate the data three times. If I'm an IT architect, the idea that I can spend less money and get higher reliability from simple cluster storage file replication should be very attractive.

    Someone needs to hurry up and write a good cross-platform clustering file system solution. Something that encourages a company to buy bigger, better-value HDs for their desktops so they can be used as redundant storage.

  • No, it's 131,072. Does no one care about base-2 anymore? /To the sarcasm-disabled: it's a joke.
  • Software RAID FTW!!

    In all seriousness, in truly critical storage you save your stuff under RAID1. RAID5 is simply too unreliable for the task (not to mention that those controllers aren't exactly cheap).

    So save yourself trouble, money, and grief, and just use logical volume management to replicate drives.

  • by gelfling ( 6534 ) on Tuesday February 20, 2007 @09:32PM (#18091474) Homepage Journal
    I wonder if anyone looked at what actually failed in the drives? An arm, a platter, an actuator, a board, an MPU?

    Would an analysis tell us that SSDs are not only faster but more reliable and if so by how much?
  • forget RAID? (Score:3, Informative)

    by juventasone ( 517959 ) on Tuesday February 20, 2007 @09:38PM (#18091520)

    Translation: one array drive failure means a much higher likelihood of another drive failure ... Further, these results validate the Google File System's central redundancy concept: forget RAID, just replicate the data three times.

    The fact that another drive in an array is more likely to fail if one has already failed makes a lot of sense, but the conclusion to forget RAIDs doesn't. Arrays are normally composed of the same drive model, even the same manufacturing batch, and are in the same operating environment. If something is "wrong" with any of these three variables, and it causes a drive to fail, it's common sense the other drives have a good chance at following. I've seen real-world examples of this.

    In my real-world situations, the RAID still did its job: the drive was replaced, and nothing was lost, despite subsequent failure of other drives in the array. Sure, you can get similar reliability at a lower price by replicating data, but I think that's always been understood as the case. Furthermore, as someone else in the forum mentioned, enterprise-class RAIDs are often used primarily for performance reasons. A modern hardware RAID controller (with a dedicated processor and RAM) can create storage performance unattainable outside of a RAID.
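
    A quick sketch of why that correlation matters for the RAID 5 math (the 10x hazard multiplier is an arbitrary illustration, not a number from the paper):

        import math

        def p_second_failure(survivors, mtbf_hours, window_hours, hazard_multiplier=1.0):
            # Chance that any surviving drive fails within the window after the first
            # failure, under a constant-rate model scaled by the multiplier.
            rate = hazard_multiplier * survivors / mtbf_hours
            return 1 - math.exp(-rate * window_hours)

        # 7 surviving drives, 300,000 h observed MTBF, one-week exposure window:
        print(f"independent drives: {p_second_failure(7, 300_000, 168):.2%}")
        print(f"correlated (10x hazard): {p_second_failure(7, 300_000, 168, 10):.2%}")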

  • by Anonymous Coward
    is neither working nor broken... Unless you look at it of course ;)
  • by RebornData ( 25811 ) on Tuesday February 20, 2007 @09:43PM (#18091588)
    What's interesting to me is that neither of these papers mentions the issue of pre-installation handling. The good folks over at Storage Review [storagereview.com] seem to be of the opinion [storagereview.com] that the shocks and bumps that happen to a drive between the factory and the final installation are the most significant factor in drive reliability (much more than brand, for example).

    The Google paper talks a bit about certain drive "vintages" being problematic, but I wonder if they buy drives in large lots, and perhaps some lots might have been handled roughly during shipping. If they could trace each hard drive back to the original order, perhaps they could look to see if there's a correlation between failure and shipping lot.

    -R

  • I doubt MTBF fits into anyone's thoughts when buying a drive, unless they are buying bulk or such for a business and have to justify the choice. I am only talking about home use here.

    Personally I have only ever had one drive go on me (a Quantum Scirroco) in 10 years. For myself, and most home users, that's a great track record. On the other hand, I have had friends and relatives whose drives just up and quit. New ones, old ones, many brands. As long as you buy a major brand, they seem to be more or less equa
  • all this is moot (Score:3, Insightful)

    by billcopc ( 196330 ) <vrillco@yahoo.com> on Tuesday February 20, 2007 @09:56PM (#18091698) Homepage
    Hard drives die often because the manufacturers build them cheaply, the same as every other component in a PC. Why would they ever make a bulletproof hard drive? They'd go out of business!

    Sure, some of them end up being replaced under warranty, but a lot of them don't, and so Maxtor/IBM/Hitachi make another buck off your sorry ass. There isn't a sane server admin that doesn't keep a set of spares in his desk drawer, because it's not a question of "if" it dies but WHEN. Hell, most decently-geared techies have a whole box of hard drives, pre-mounted in hotswap bays ready to rock. And if it weren't for the fact that I was just laid off a month ago, I'd be buying a couple spare SATA drives myself, I just have a funny feeling something's going to go tits up in my media server. I haven't had any warnings or hiccups, but I just know the Seagate devil's planning his move, waiting for 2 drives to start straying so he can kill my Raid-5 nice and fast. Hard drives are little more than Murphy's Law in a box.
  • by tedgyz ( 515156 ) * on Tuesday February 20, 2007 @09:58PM (#18091726) Homepage
    All the hard drives I installed in my family's computers have failed in the last 5 years - including mine. :-(

    Waaaah! They cry, when I tell them there is no hope for the family photos, barring a media reclamation service == $$$

    I tell everyone: "Assume your hard drive will fail at any moment, starting now! What is on your hard drive that you would be upset if you never saw it again?"
    • > I tell everyone: "Assume your hard drive will fail at any moment, starting now! What is on your
      > hard drive that you would be upset if you never saw it again?"

      True enough, I use a similar warning. Mine is, "Don't leave anything on your hard drive you care about. If you manage to make it a year without reloading Windows the drive can crap out with no warning. Burn anything you can't download again to a CD/DVD."

      Personally I don't have to worry about Windows and I have a RAID5 at home.... but I stil
  • by AllParadox ( 979193 ) on Tuesday February 20, 2007 @10:21PM (#18091944)
    As mechanical devices, hard drives are appallingly reliable.

    The electronics on the hard drive rank as major players in heat generation in the boxen.

    Heat kills transistorized components.

    "Hard Drive Data Recovery" companies often have nothing more sophisticated than a hard drive buying program, and very competent techs soldering and unsoldering drive electronics. They buy a few each of most available hard drives, as the drives appear on the market. When a customer sends them a hard drive for "recovery", the techs find a matching drive in inventory, disconnect the electronics, and replace the electronics in the drive. The percentage of drive failures due to mechanical failure is very low.

    When I bought a desktop computer for an unsophisticated family member, I also purchased and installed a drive cooler - a special fan that blows directly on the drive electronics.

    I was very concerned about MTBF. I just assumed that the manufacturer's information was totally irrelevant to my situation - a hard drive in a corner of the tower, covered with dust, and no air circulation.

    I occasionally pick up used equipment from family and friends. Usually, it is broken. Often, it is the hard drive. What is amazing is not that they failed, but that they lasted so long with a 1.5 inch coating of insulating dust.

    I suspect this would also explain the rising failure rate with time. Nobody seems to clean the darned things. They just sit and run 24/7/365, until they fail.
    • Sorry to burst your bubble, but the Google paper claims that temperature has no effect on the failure rate.
  • by cats-paw ( 34890 ) on Tuesday February 20, 2007 @11:52PM (#18092598) Homepage
    I keep hearing this persistent rumor that it's disk spin-up that is the most significant contributor to disk failure. The moral of the story is that systems which are left on 24/7 are less likely to see HD failures than systems turned on and off every day.

    Now if that's really true, wouldn't it be quite simple for the manufacturers to simply spin up the disk more slowly by putting in very simple and reliable motor control circuitry?

    Does anyone have any real evidence, i.e. not anecdotal, that this is really true?
