Disk Failure Rates More Myth Than Metric
Lucas123 writes "Mean time between failure (MTBF) ratings suggest that disks can last from 1 million to 1.5 million hours, or 114 to 170 years, but study after study shows that those metrics are inaccurate for determining hard drive life. One study found that some disk drive replacement rates were greater than one in 10, nearly 15 times what vendors claim, and all of these studies show failure rates growing steadily with the age of the hardware. One former EMC employee turned consultant said, 'I don't think [disk array manufacturers are] going to be forthright with giving people that data because it would reduce the opportunity for them to add value by 'interpreting' the numbers.'"
There are only two kind of peeps... (Score:5, Insightful)
Marketplace can't function without good data (Score:5, Insightful)
The inevitable result is a race to the bottom. Buyers will reason they might as well buy cheap, because they at least know they're saving money, rather than paying for quality and likely not getting it.
Re:There are only two kind of peeps... (Score:5, Insightful)
Re:Never had a drive fail (Score:3, Insightful)
warranties (Score:5, Insightful)
What MTBF is for. (Score:5, Insightful)
So anyway: MTBF is not intended as an indicator of a specific unit's reliability. It is a statistical measure used to calculate how many spares are needed to keep a large population of machines working. It cannot be applied to a single unit in the way it can be applied to a large population of units.
Perhaps the classic example is the old tube-based computers like ENIAC: if a single tube has an MTBF of one year but the computer has 10,000 tubes, you'd be changing tubes (on average) more than once an hour, and you'd rarely even get an hour of uptime. (I hope I got that calculation vaguely correct.)
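A quick back-of-the-envelope check of that tube arithmetic, assuming independent failures at a constant rate (the figures are the ones from the comment, not real ENIAC specs):

```python
# Population MTBF math: tube MTBF of 1 year, 10,000 tubes,
# assuming independent failures at a constant rate.
HOURS_PER_YEAR = 8760

tube_mtbf_hours = 1 * HOURS_PER_YEAR   # MTBF of a single tube
tube_count = 10_000

# With independent constant-rate failures, the machine as a whole sees one
# failure every (tube MTBF / number of tubes) hours on average.
system_mtbf_hours = tube_mtbf_hours / tube_count
failures_per_hour = tube_count / tube_mtbf_hours

print(f"Mean time between tube failures: {system_mtbf_hours * 60:.0f} minutes")
print(f"Expected tube replacements per hour: {failures_per_hour:.2f}")
# About 53 minutes between failures and ~1.1 replacements per hour,
# so "more than once an hour" checks out.
```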
Re:MTBF For Unused Drive? (Score:3, Insightful)
If I were running a data-based business I'd count that as a "failure" since I had to go deal with the drive, but the HD company probably wouldn't since no data was permanently lost.
Re:Never had a drive fail (Score:4, Insightful)
Re:There are only two kind of peeps... (Score:3, Insightful)
MTBF rate calculation method is flawed (Score:2, Insightful)
To make this sort of test work, it must be run over a much longer period of time. But in the process of designing, building, testing and refining disk drive hardware and firmware (software), there isn't that much extra time to test drive failure rates. Want to wait an extra 9 months before releasing that new drive, to get accurate MTBF numbers? Didn't think so. How many different disk controllers do they use in the MTBF tests, to approximate different real-world behaviors? Probably not that many.
Could they run longer tests, and revise MTBF numbers after the initial release of a drive? Sure, and many of them do, but that revised MTBF would almost always be lower, making it harder to sell the drives. On the other hand, newer drives are certainly available every quarter, so it may not be a bad idea to lower the apparent value of older drive models.
So it's better to assume a drive will fail before you're done using it. They're mechanical devices with high-speed moving parts and very narrow tolerable ranges of operation (the drive head has to be far enough from the platters not to hit them, but close enough to read smaller and smaller areas of data). Anyone who's worked in a data center, or even a small server room, knows that drives fail. When I've had around two hundred drives, of varying ages, sizes and manufacturers, in a data center, I observed a failure rate of five to ten drives per year. That works out to an MTBF well below the figures quoted for enterprise disk array drives (SCSI, FC, SAS, whatever), but the point stands: drives fail. That's why we have RAID. Storage Review has a good overview of how to interpret MTBF values from drive manufacturers [storagereview.com].
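A minimal sketch of turning those observed failure counts into an implied MTBF, assuming roughly constant failure rates and 24/7 operation (the fleet size and failure counts are the ones from the comment above):

```python
# Convert an observed annual failure count into an implied MTBF:
# total drive-hours accumulated per year divided by observed failures.
HOURS_PER_YEAR = 8760

def implied_mtbf_hours(drive_count: int, failures_per_year: float) -> float:
    """Drive-hours per year divided by failures per year."""
    return drive_count * HOURS_PER_YEAR / failures_per_year

for failures in (5, 10):
    mtbf = implied_mtbf_hours(200, failures)
    print(f"{failures} failures/year over 200 drives: ~{mtbf:,.0f} hour MTBF")
# Roughly 350,000 and 175,000 hours respectively, well short of the
# 1,000,000+ hour figures quoted for enterprise drives.
```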
MTBF is a useful statistical measure (Score:4, Insightful)
Anecdotal reports of failures also need to consider the operating environment. If I have a server rack, and most servers in the rack have a drive failure in the first year, is it the drive design or the server design? Given the relative effort that usually goes into HDD design and box design, it's more likely to be due to poor thermal management in the drive enclosure. Back in the day when Apple made computers (yes, they did once, before they outsourced it) their thermal management was notoriously better than that of many of the vanilla PC boxes, and properly designed PC-format servers like the HP Kayaks were just as expensive as Macs. The same, of course, went for Sun, and that was one reason why elderly Mac and Sparc boxes would often keep chugging along as mail servers until there were just too many people sending big attachments.
One possibly related oddity that does interest me is laptop prices. The very cheap laptops are often advertised with optional 3 year warranties that cost as much as the laptop. Upmarket ones may have three year warranties for very little. I find myself wondering if the difference in price really does reflect better standards of manufacture so that the chance of a claim is much less, whether cheap laptops get abused and are so much more likely to fail, or whether the warranty cost is just built into the price of the more expensive models because most failures in fact occur in the first year.
RAID, If You Really Care (Score:2, Insightful)
Comment removed (Score:3, Insightful)
Re:MTBF For Unused Drive? (Score:3, Insightful)
The unrecoverable read error rate is on the order of one error per 10^14 bits read. Calculating this for a busy system reading 1 MByte/s gives you approx. 10^7 seconds per unrecoverable read failure; in other words, it happens roughly three times per year on average. So forget MTBF on busy systems and hope that your controller is able to do re-reads on a disk. Otherwise, your busy system/array is not going to last very long.
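A minimal sketch of that arithmetic, assuming an error rate of one unrecoverable error per 10^14 bits read and a steady 1 MByte/s of reads:

```python
# Time between unrecoverable read errors on a busy system.
SECONDS_PER_YEAR = 365 * 24 * 3600

bits_per_error = 1e14          # assumed spec: ~1 unrecoverable error per 10^14 bits
read_rate_bytes_per_s = 1e6    # a busy system reading 1 MByte/s

seconds_per_error = bits_per_error / (read_rate_bytes_per_s * 8)
errors_per_year = SECONDS_PER_YEAR / seconds_per_error

print(f"~{seconds_per_error:.1e} seconds per unrecoverable read error")
print(f"~{errors_per_year:.1f} unrecoverable read errors per year")
# Roughly 1.25e7 seconds per error, i.e. two to three unrecoverable
# errors per year of continuous reads.
```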
Re:Never had a drive *not* fail. (Score:2, Insightful)
Okay, when I think of backup, it's data backup.
I wouldn't back up applications or operating systems, just their configuration files.
Anyway, what I'd try doing is diff(1)ing all those backed up system files with the originals.
Or am I missing something completely, and it's some weird rootkit that's embedded in some wm* media file?
Re:Temperature is the key (Score:3, Insightful)
However, Google's data doesn't appear to have a lot of points for temperatures over 45 degrees or so (as is to be expected, since most of their drives are in a climate-controlled machine room).
The average drive temperature in the typical home PC would be *at least* 40 degrees, if not higher. While it's been some time since I checked, I seem to recall the drive in my mum's G5 iMac was around 50 degrees when the machine was _idle_.
Google's data is useful for server room environments, but I'd be hesitant to extrapolate it to drives that aren't kept in a server room with a ~20 degree C ambient temperature and active cooling.
Typical misleading title (and bad article) (Score:3, Insightful)
Disks have two separate reliability metrics. The first is their expected lifetime. In general, disk failure follows a "bathtub curve": drives are much more likely to fail in the first few weeks of operation. If they make it past this phase, they become very reliable, for a while anyway. Once their expected lifetime is reached, their failure rate starts climbing steeply.
The often-quoted MTBF numbers express the disk's reliability during the "safe" part of this probability distribution. Therefore, a disk with an expected lifetime of, say, 4 years can have an MTBF of 100 years. This sounds theoretical until you consider that if you have 200 such disks, you can expect on average about two of them to fail each year.
People running large data warehouses are painfully aware of these two separate numbers. They need to replace all "expired" disks, and also have enough redundancy to survive disk failures in the meantime.
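A minimal sketch of the population math, using the illustrative numbers above (a 100-year MTBF on the flat part of the curve and a 200-disk fleet), not vendor specs:

```python
# Expected annual failures for a fleet during the "safe" middle of the
# bathtub curve, where the quoted MTBF applies.
def expected_failures_per_year(disk_count: int, mtbf_years: float) -> float:
    """Fleet size divided by MTBF (in years) gives expected failures per year."""
    return disk_count / mtbf_years

fleet = 200
mtbf_years = 100        # roughly 876,000 hours
lifetime_years = 4      # expected service life before wear-out sets in

print(f"Expected failures per year in a {fleet}-disk fleet: "
      f"{expected_failures_per_year(fleet, mtbf_years):.1f}")
# About 2 per year while the disks are young, and that says nothing about
# the steep climb once the 4-year wear-out phase begins.
```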
The article goes so far as to state this:
"When the vendor specs a 300,000-hour MTBF -- which is common for consumer-level SATA drives -- they're saying that for a large population of drives, half will fail in the first 300,000 hours of operation," he says on his blog. "MTBF, therefore, says nothing about how long any particular drive will last."
However, this obviously flew over the head of the author:
The study also found that replacement rates grew constantly with age, which counters the usual common understanding that drive degradation sets in after a nominal lifetime of five years, Schroeder says.
Common understanding is that 5 years is a bloody long life expectancy for a hard disk! It would take divine intervention to stop failures from rising after such a long time!
Re:Never had a drive *not* fail. (Score:3, Insightful)
Re:warranties (Score:3, Insightful)
So in this way a manufacturer can get away with a long warranty, without necessarily incurring a cost for unreliability.
Re:Typical misleading title (and bad article) (Score:3, Insightful)
Where did you pull these numbers from?
Re:Marketplace can't function without good data (Score:4, Insightful)
Also, the manufacturer needs to specify the conditions of the test (temperature, humidity, etc.), and customers requiring reliability need to ensure they run near those conditions.
If you do a 1000-hour test and all your drives have a design fault that causes a large proportion of them to fail after about 5000 hours of usage, you probably won't notice the fault, but 7 months down the line, customers who run the drive 24/7 will.
The problem, of course, is that by the time you have done proper testing (i.e. running the drives for their expected lifespan under realistic operating conditions and seeing what proportion fail during that time, and when), a device with an expected lifetime measured in years is already obsolete.
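A minimal sketch of why the short test misses the fault: simulate a fleet where a design flaw kills a chunk of drives around 5,000 hours, then look at what a 1,000-hour qualification run would actually catch. The failure distribution and all parameters here are made-up assumptions, purely for illustration:

```python
# Simulate a wear-out design fault that a short burn-in test cannot see.
import random

random.seed(0)
DRIVES = 1000
TEST_HOURS = 1000

def time_to_failure() -> float:
    # Assumption: 40% of drives hit the design fault and fail around
    # 5,000 +/- 500 hours; the rest fail at a low constant background
    # rate (MTBF of ~500,000 hours).
    if random.random() < 0.4:
        return random.gauss(5000, 500)
    return random.expovariate(1 / 500_000)

lifetimes = [time_to_failure() for _ in range(DRIVES)]
seen_in_test = sum(t <= TEST_HOURS for t in lifetimes)
dead_in_field = sum(t <= 6000 for t in lifetimes)  # ~8 months of 24/7 use

print(f"Failures observed in a {TEST_HOURS}-hour test: {seen_in_test}/{DRIVES}")
print(f"Failures within the first 6,000 hours in the field: "
      f"{dead_in_field}/{DRIVES}")
# The short test sees almost nothing; the field sees the design fault.
```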
Re:There are only two kind of peeps... (Score:5, Insightful)
"Only wimps use tape backup: real men just upload their important stuff on ftp, and have everyone else mirror it." -Linus Torvalds
Except... (Score:3, Insightful)
Enough with the little sample sizes (Score:3, Insightful)