Disk Failure Rates More Myth Than Metric
Lucas123 writes "Mean time between failures (MTBF) ratings suggest that disks can last from 1 million to 1.5 million hours, or 114 to 170 years, but study after study shows that those metrics are inaccurate for determining hard drive life. One study found that some disk drive replacement rates were greater than one in 10. This is nearly 15 times what vendors claim, and all of these studies show failure rates grow steadily with the age of the hardware. One former EMC employee turned consultant said, 'I don't think [disk array manufacturers are] going to be forthright with giving people that data because it would reduce the opportunity for them to add value by 'interpreting' the numbers.'"
Never had a drive *not* fail. (Score:5, Informative)
My anecdotal converse is that I have never had a hard drive *not* fail. I am a bit on the cheap side of the spectrum, I'll admit, but having lost the last of my 40GB drives this winter, I now claim a pair of 120s as my smallest.
I always seem to have a use for a drive, so I run them until failure.
Re:Failure rates != warranty period. (Score:5, Informative)
Warranty periods for 750 gig and 1 terabyte drives from Western Digital [zipzoomfly.com], Samsung [zipzoomfly.com], and Hitachi [zipzoomfly.com], are 3 years to 5 years according to the info on zipzoomfly.com.
A one-year warranty doesn't seem that common. External drives seem to have one-year warranties, but even SATA drives at Best Buy mostly have 3 years.
Dirty, spiky power is a much larger problem. A few years back I had 3 or 4 nearly identical WD 80GB drives die within a couple of months of each other. They were replaced with identical drives that are still chugging along fine all this time later. The only major difference is that I gave each system a cheapo UPS.
Being somewhat cheap, I tend to use disks until they wear out completely. After a few years I shift the disks to storing things which are permanently archived elsewhere, or to swap. Seems to work out fine; the only problem is what happens if the swap goes bad while I'm using it.
Re:Temperature is the key (Score:4, Informative)
On the other hand, I had a Samsung disk that ran at 40 C tops, in a worse drive bay too! The Maxtor one had free air passage in the middle bay (no drives nearby), where the Samsung was side-by-side with the metal casing.
So I'm thinking there can be some measurable temperature differences between drive brands, and a study of this, along with its relationship to brand failure rates, would be most interesting!
Build your own USB drives (Score:4, Informative)
I purchased a 500GB Western Digital My Book about a year and a half ago. I figured that a pre-fab USB enclosed drive would somehow be more reliable than building one myself with a regular 3.5" internal drive and my own separately purchased USB enclosure (you may dock me points for irrational thinking there). Of course, I started getting the click-of-death about a month ago, and I was unpleasantly surprised to discover that the warranty on the drive was only for 1 year, rather than the 3-year warranty that I would have gotten for a regular 3.5" 500GB Western Digital drive at the time. Meanwhile, my 750GB Seagate drive in an AMS VENUS enclosure has been chugging along just fine, and if it fails sometime in the next four years, I will still be able to exchange it under warranty.
The moral of the story is that, when there is a difference in the warranty periods (e.g., 1 year vs. 5 years), it makes a lot more sense to build your own USB-enclosed drive than to order a pre-fab one.
Re:What MTBF is for. (Score:4, Informative)
Long story short: a 3-year-old drive will not have the same MTBF as a brand new drive. And an MTBF of 1 million hours doesn't mean that the median drive will live to 1 million hours.
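To put numbers on that: even under the constant-failure-rate (exponential) model that MTBF figures assume, the median drive dies well before the MTBF. A minimal sketch of the math:

    import math

    mtbf = 1_000_000  # rated MTBF, in hours

    # Under an exponential failure model, the probability of surviving
    # to time t is exp(-t / mtbf).
    survive_to_mtbf = math.exp(-1)      # ~0.37: only ~37% of drives reach the MTBF
    median_life = mtbf * math.log(2)    # ~693,000 hours, not 1,000,000

    print(f"P(survive to MTBF) = {survive_to_mtbf:.2f}")
    print(f"median lifetime = {median_life:,.0f} hours")

And past the design lifetime the constant-rate assumption breaks down entirely, which is exactly the wear-out effect the studies in the summary describe.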
Re:Never had a drive fail (Score:3, Informative)
3.8GB drive: failed
45GB drive: failed
2x500GB drive: failed
Still working:
9GB
27GB
100GB
120GB
2x160GB
2x250GB
3x500GB
2x750GB
3x500GB external
However, in every case the failure has been the worst possible. The 45GB drive was my primary drive at the time, with all my recent stuff on it. The 2x500GB were in a RAID5; you know what happens in a RAID5 when two drives fail? Yep. Right now I'm running 3xRAID1 for the important stuff (+ backup), JBOD for everything else.
Re:Never had a drive fail (Score:3, Informative)
Excess heat can cause a hard drive's lubricant to go bad, which causes weird noises; logic board failures and head positioning failures also make quite a racket.
In my experience most drives fail without any indication from SMART tests, i.e. logic board failures; bad sectors are quite rare nowadays.
Re:Temperature is the key (Score:3, Informative)
> On the other hand, I had a Samsung disk that ran at 40 C tops, in a worse drive bay too! The Maxtor one had free air passage in the middle bay (no drives nearby), where the Samsung was side-by-side with the metal casing.
Air is a much better insulator than metal.
MTBF assumes drives are replaced every few years (Score:3, Informative)
Of course, I think this is another deceptive definition from the hard drive industry... To me, the drive's lifetime ends when it fails, not at "5 years".
Source: http://www.rpi.edu/~sofkam/fileserverdisks.html [rpi.edu]
Re:Temperature is the key (Score:2, Informative)
To get a meaningful result, you would need to take a population of the same drive model and compare the effects of temperature across it.
Re:MTBF For Unused Drive? (Score:5, Informative)
If you think an MTBF of 100 years means the disk will last 100 years, you're bound to be disappointed, because that's not what it means. MTBF is calculated in different ways by different companies, but generally there are at least two numbers you need to look at: MTBF and the design or expected lifetime. A disk with an MTBF of 200,000 hours and a lifetime of 20,000 hours means that 1 in 10 is expected to fail during its lifetime, or that with 200,000 disks one will fail every hour. It does not mean the average drive will last 200,000 hours. After the lifetime is over, all bets are off.
In short, MTBF is a statistical measure of the expected failure rate during the expected lifetime of a device; it is not a measure of the expected lifetime itself.
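A quick sketch of the parent's arithmetic, assuming the constant-failure-rate approximation that MTBF figures are built on:

    mtbf_hours = 200_000      # rated MTBF
    lifetime_hours = 20_000   # design/expected lifetime

    # Fraction of drives expected to fail within the design lifetime:
    frac_failing = lifetime_hours / mtbf_hours    # 0.1, i.e. 1 in 10

    # With a fleet of 200,000 such disks, failures per hour across the fleet:
    fleet_size = 200_000
    failures_per_hour = fleet_size / mtbf_hours   # 1.0, i.e. one per hour

    print(frac_failing, failures_per_hour)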
Re:MTBF For Unused Drive? (Score:4, Informative)
If you have 10,000 drives and the MTBF is 1,000,000 hours, you will have a failure every 100 hours.
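Spelled out, under the same constant-failure-rate assumption:

    num_disks = 10_000
    mtbf_hours = 1_000_000

    failures_per_hour = num_disks / mtbf_hours      # 0.01
    hours_between_failures = 1 / failures_per_hour  # 100 hours

    print(hours_between_failures)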
Here's a good document on disk failure information:
http://research.google.com/archive/disk_failures.pdf [google.com]
Re:Never had a drive *not* fail. (Score:5, Informative)
I administer a small server which runs its services in virtual sandboxes: one physical box, but through KVM the Apache/PHP/MySQL stack is in one sandbox, the SMTP/IMAP in another, etc. Each VM image is about 20GB, give or take, and the machine has two physical hard drives. My backup is periodic and incremental, and it alternates between the drives... at any given time each hard drive will have two copies of every VM, not counting the one that's actually running.
Now... here's where the full system backup comes in: because it's a virtual machine, it's only a single 20GB file. Backing it up is as easy as shutting down the VM and copying the file. Recovering from a backup is where it gets even easier... all I have to do is copy that one file back, and start it up. Poof. *everything* is back the way it was at the time of the backup. Total time to recover? Less than a minute.
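For what it's worth, the whole cycle scripts down to a few lines. A rough sketch; the domain name and paths here are made up, and it assumes a libvirt-managed KVM guest that honors a clean shutdown request:

    import shutil
    import subprocess
    import time
    from datetime import date

    DOMAIN = "webstack"                      # hypothetical VM name
    IMAGE = "/var/lib/vms/webstack.img"      # hypothetical disk image path
    BACKUP = f"/mnt/disk2/backups/webstack-{date.today()}.img"

    # Ask the guest to shut down cleanly, then wait until it has stopped.
    subprocess.run(["virsh", "shutdown", DOMAIN], check=True)
    while "running" in subprocess.run(["virsh", "domstate", DOMAIN],
                                      capture_output=True, text=True).stdout:
        time.sleep(5)

    # The entire VM is one file, so the backup is a single copy...
    shutil.copy2(IMAGE, BACKUP)

    # ...then bring the VM back up.
    subprocess.run(["virsh", "start", DOMAIN], check=True)

Restoring is the same copy in reverse: copy the backup over the image and virsh start it again.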
And the host OS is easy to rebuild, too, because there are no configuration files to worry about. SSH and KVM are the only services the host is running, and for the most part an out-of-the-box configuration for most Linux distributions will handle it quite nicely.
So... I guess to answer your question... in my case a complete system backup makes administering, and recovering from "oh shit" moments a hell of a lot easier.
Seagate = 5 yrs, not 3. (Score:2, Informative)
Re:What MTBF is for. (Score:3, Informative)
Let's say you buy quality SAS drives for your servers and SAN. They're enterprise grade, so they have an MTBF of 1 million hours. Your servers and SAN have a total of 500 disks between them. How many drives should you expect to fail each year?
IIRC, this is the calculation:
1 year = 365 days x 24 hours = 8,760 hours per year
500 disks x 8,760 hours per year = 4,380,000 disk-hours per year
4,380,000 disk-hours per year / 1,000,000 hours per failure = 4.38 disk failures per year
So a 500 disk server farm should expect 4-5 disk failures annually.
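Or, as a reusable sketch of the same back-of-the-envelope math (valid only while the drives are within their design lifetime, per the MTBF discussion above):

    def expected_annual_failures(num_disks: int, mtbf_hours: float) -> float:
        """Expected disk failures per year at the rated MTBF."""
        hours_per_year = 365 * 24  # 8,760
        return num_disks * hours_per_year / mtbf_hours

    print(expected_annual_failures(500, 1_000_000))  # 4.38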