Data Storage

HDD Average Life Span Misses 3-Year Mark In Study of 2,007 Defective Drives (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: An analysis of 2,007 damaged or defective hard disk drives (HDDs) has led a data recovery firm to conclude that "in general, old drives seem more durable and resilient than new drives." The statement comes from a Los Angeles-headquartered HDD, SSD, and RAID data recovery firm aptly named Secure Data Recovery that has been in business since 2007 and claims to have resolved more than 100,000 cases. It studied the HDDs it received in 2022. "Most" of those drives were 40GB to 10TB, according to a blog post by Secure Data Recovery spotted by Blocks & Files on Thursday.

Secure Data Recovery's March 8 post broke down the HDDs it received by engineer-verified "power-on hours," or the total amount of time the drive was functional, starting from when its owner began using it and ending when the device arrived at Secure Data Recovery. The firm also determined the drives' current pending sector count, depicting "the number of damaged or unusable sectors the hard drive developed during routine read-and-write operations." The company's data doesn't include HDDs that endured non-predictable failures or damage by unexpected events, such as electrical surges, malware, natural disasters, and "accidental mishandling," the company said.
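The two metrics the firm leans on, power-on hours and current pending sector count, are standard SMART attributes any ATA drive reports. As a rough illustration (the sample output below is made up, and the column layout is the one `smartctl -A` from smartmontools typically prints), pulling them out programmatically might look like:

```python
# Sketch: extracting the two SMART attributes the study relies on
# (Power_On_Hours and Current_Pending_Sector) from `smartctl -A` output.
# The sample text below is illustrative, not from a real drive.

import re

SAMPLE_SMARTCTL_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours          0x0032   072   072   000    Old_age   Always       -       24815
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       8
"""

def parse_smart_attributes(text):
    """Return {attribute_name: raw_value} for each SMART attribute line."""
    attrs = {}
    for line in text.splitlines():
        # ID, name, hex flag, six intermediate columns, then the raw value
        m = re.match(r"\s*\d+\s+(\w+)\s+0x[0-9a-fA-F]+(?:\s+\S+){6}\s+(\d+)", line)
        if m:
            attrs[m.group(1)] = int(m.group(2))
    return attrs

attrs = parse_smart_attributes(SAMPLE_SMARTCTL_OUTPUT)
print(attrs["Power_On_Hours"])           # hours the drive has been powered on
print(attrs["Current_Pending_Sector"])   # sectors awaiting reallocation
```

On a real system you would feed this the output of `smartctl -A /dev/sdX`; the exact columns can vary by drive and smartmontools version.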

Among the sample, 936 drives are from Western Digital, 559 come from Seagate, 211 are Hitachi brand, 151 are Toshiba's, 123 are Samsung's, and there are 27 Maxtor drives. Notably, 74.5 percent of the HDDs came from either Western Digital or Seagate, which Secure Data Recovery noted accounted for 80 percent of hard drive shipments in 2021, citing Digital Storage Technology Newsletter data shared by Forbes. The average time before failure among the sample size was 2 years and 10 months, and the 2,007 defective HDDs had an average of 1,548 bad sectors. "While 1,548 bad sectors out of hundreds of millions or even billions of disk subdivisions might seem minuscule, the rate of development often increases, and the risk of data corruption multiplies," the blog said.
"We found that the five most durable and resilient hard drives from each manufacturer were made before 2015," says Secure Data Recovery. "On the other hand, most of the least durable and resilient hard drives from each manufacturer were made after 2015." One of the reasons for this may have to do with HDD manufacturers "pushing the performance envelope," adds Ars. "This includes size limits that cut 'allowance between moving parts, appearing to affect mechanical damage and wear resistance.'"

Secure Data Recovery also believes that shingled magnetic recording (SMR) impacts HDD reliability, as the disks place components under "more stress."

"What this study shows is not the average working life of a hard disk drive," notes Blacks & Files. "Instead it provides the average working life if a failed disk drive. Cloud storage provider Backblaze issues statistics about the working life of its disk drive fleet and its numbers are quite different." A recent report of theirs found that SSDs are more reliable than HDDs.
Comments Filter:
  • Selection bias (Score:5, Interesting)

    by CmdrPorno ( 115048 ) on Monday March 20, 2023 @11:36PM (#63386711)

    Would there not be a tremendous amount of selection bias inherent in the demographics of people seeking data recovery services, which are not inexpensive? If my 500 GB spinning platter HD fails, it's unlikely to have irreplaceable data on it that hasn't been fully backed up, and I'm unlikely to seek data recovery for it.

    • Re: Selection bias (Score:5, Informative)

      by NagrothAgain ( 4130865 ) on Tuesday March 21, 2023 @12:09AM (#63386755)
      Bingo. The entire "study" is a textbook exercise in bad assumptions and bad data analysis. There are so many problematic assumptions I don't even know where to start pointing and laughing at them.
    • Smirnov,

      In total agreement. Not a plug, but I've seen that Backblaze provides great deets on their luck with hard disks AT SCALE. I haven't peeked there in years. I wonder what their data show?

    • Their data is also unlikely to include devices that are still working, so of course it's going to trend towards poor longevity.
      • Their data is also unlikely to include devices that are still working, so of course it's going to trend towards poor longevity.

        Not necessarily. The data is not about the average lifetime but the average age of the failed disks.

        Just as an example, you buy 1000 WD drives and 1000 Seagate drives. Two WD HDDs fail after a year and the other 998 continue to work for many years. 40 Seagate drives fail after two years and the other 960 continue to work.

        By the metrics of TFA, the WD HDDs are worse since the failed drives were a year old when they failed. Seagate looks better because the failed Seagate drives failed after an average of two years.
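The point above can be made concrete with a toy calculation. The fleet sizes and failure counts below are the hypothetical numbers from the comment, not real WD or Seagate data:

```python
# Sketch: the average age of *failed* drives says nothing about failure *rate*.
# Fleet A: 1000 drives, 2 fail at year 1  -> rate 0.2%, avg failure age 1 yr
# Fleet B: 1000 drives, 40 fail at year 2 -> rate 4.0%, avg failure age 2 yrs

fleet_a_failures = [1.0] * 2      # ages (in years) of the failed drives
fleet_b_failures = [2.0] * 40

def avg(xs):
    return sum(xs) / len(xs)

print(f"Fleet A: rate {len(fleet_a_failures)/1000:.1%}, avg age at failure {avg(fleet_a_failures):.1f} yr")
print(f"Fleet B: rate {len(fleet_b_failures)/1000:.1%}, avg age at failure {avg(fleet_b_failures):.1f} yr")
# Fleet B fails 20x as often, yet by "average age of failed drives" it looks better.
```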

    • by AmiMoJo ( 196126 )

      It does raise an interesting point though. With modern drives in the 20TB range, backup becomes difficult, so people are probably using data recovery services more than ever.

      There aren't any good, affordable backup solutions for that amount of data. External drives are a fail because they require manual intervention, which inevitably will be forgotten. Same with tapes, not that tapes are affordable or easy to use. Cloud services aren't cheap for that much data.

      FWIW I use Duplicati and Jottacloud, but Jottac

      • by slaker ( 53818 )

        I realize this isn't a highly palatable suggestion to a lot of people, but LTO is stable long term and can scale to the amount of data present. Older drives are affordable on Ebay and tape drives are relatively easy to maintain, even if you wind up getting two or three of them.

        Technically, yes, a bunch of 4TB enterprise drives is similar on a cost basis, but I don't have as much long-term confidence in that stack of old drives as I do tapes.

        I managed to get a new-in-box LTO5 changer (that somebody undoubted

        • by tlhIngan ( 30335 )

          I realize this isn't a highly palatable suggestion to a lot of people, but LTO is stable long term and can scale to the amount of data present. Older drives are affordable on Ebay and tape drives are relatively easy to maintain, even if you wind up getting two or three of them.

          Given the prevalence of consumer grade 10TB+ drives that are affordable (especially on sale - $300 or less), LTO backups are horrendously expensive. LTO-7 does 6/15TB (native/compressed), and LTO-8 is probably the reasonable su

          • by slaker ( 53818 )

            The guy who owns the datacenter I use sells me 12 - 16TB enterprise drives for $150/each. They have a couple years in power on hours, but I'd trust those over any model of consumer drive. Even so, most people don't have setups that allow for enough internal drives to keep even a half dozen internal 3.5" drives, let alone what I have running at home.

            There is DEFINITELY a sweet spot in affordability with tape systems. I'd advocate for an auto-loader if there's one cheap and available, but very few end users h

      • Honestly for home use, most of that will be downloaded files that can be redownloaded.

        There are exceptions, of course, but that is the bulk.

    • Another amusing detail: 80% of the drives died due to some external event. Only 20% were down to "manufacturing quality" (or the lack of it).

    • I can buy a decent 500 GB SSD for $25 from a reputable seller. I don't think anyone is buying 500 GB platter drives anymore.
      • by Reziac ( 43301 ) *

        I buy smallish (160GB to 500GB) used laptop HDDs to use as boot drives for my random collection of not-everyday-OSs (since I no longer dual boot, for a variety of reasons). They cost around $5 and usually have less than 5,000 hours on them.

  • by omnichad ( 1198475 ) on Monday March 20, 2023 @11:39PM (#63386715) Homepage

    I don't disagree with the premise that newer drives are becoming more unreliable. That's about the only part of the story that adds up.

    How does a data recovery company's point-in-time statistics have any bearing on reality? All of the 2021 drives they received for recovery are only two years old. You don't say!

    And yeah, 15 year old drives that come in have survived for 15 years without any failures. Amazing. Because if they failed sooner or have been taken out of service they wouldn't be getting sent for recovery now.

    • Re: (Score:3, Informative)

      by CaptQuark ( 2706165 )

      This whole article is missing context. The headline should read "If HDD failures happen, it is usually in the first three years".

      The article gives data on 2,007 failed HDDs that were analyzed out of the 835 million HDDs that have shipped in the last three years. https://www.statista.com/stati... [statista.com]

      The failed drives analyzed by Secure Data Recovery only amount to 0.00024% of the drives shipped in the last three years and are so few in number that trying to make an industry-wide generalization is statistically insignificant.
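For what it's worth, the proportion above checks out:

```python
# Quick check of the parent's figure: 2,007 failed drives against roughly
# 835 million HDDs shipped over three years (the Statista number cited above).
failed = 2_007
shipped = 835_000_000
share = failed / shipped
print(f"{share:.5%}")  # -> 0.00024%
```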

      • by ShanghaiBill ( 739463 ) on Tuesday March 21, 2023 @06:36AM (#63387045)

        The headline should read "If HDD failures happen, it is usually in the first three years".

        Actually, it should read, "If your failed HDD is less than three years old, then it probably failed in its first three years."

        • The headline should read "If HDD failures happen, it is usually in the first three years".

          Actually, it should read, "If your failed HDD is less than three years old, then it probably failed in its first three years."

          Wha? I was too busy Facebook-planted in my mobile to understand what you said. Three years is a time for my smart phone to fail? What? My pictures are on the cloud so the hard drive in my phone is more than cloud years young. If I spent less time on Face....... *stares back at phone*
          *sigh*

        • by Askmum ( 1038780 )
          The average lifespan of a working HDD is 10 years.

          The average lifespan of a failed HDD is 5 years.

          The average lifespan of a failed HDD that is sent to some random data recovery company is 3 years.

          Correct data is more likely to come from large datacenters [backblaze.com] than from data recovery companies.

      • But even still, that's just statistical selection bias based on what drives are most likely to still be in service. They don't even see the good drives. It really means, "if a drive fails in the first three years, it's because you're still using it."

        Backblaze statistics actually mean something, and they have a relatively small sample too. They're just using a very harsh environment. They at the very least don't only look at the failed drives.

      • by Ken D ( 100098 )

        Oh Em Gee!

        They've rediscovered the Bathtub Curve.

    • That's about the only part of the story that adds up.

      But does it actually? I mean, it feels right: more advanced device, more complex, higher chance of failure, right? But the same could be said of every HDD since the very first one, and demonstrably there has been no significant change in reliability over the past three decades.

      I'll revert to Backblaze's statistics, which generally show their annualised failure rate holding steady around 1.3% +/- 0.4% for as far back as I could find their numbers.

      It feels like advanced things should fail more, but the data doesn't bear that out.
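For reference, an annualised failure rate like the one quoted above is just failures normalised by accumulated drive-time. A minimal sketch, with made-up counts:

```python
# How an annualised failure rate (AFR) like Backblaze's ~1.3% is computed:
# failures divided by accumulated drive-days, scaled to a year.
# The counts below are invented for illustration.

def annualized_failure_rate(failures, drive_days):
    """AFR as a percentage: failures per drive-year of accumulated runtime."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# e.g. 10,000 drives each running a full year, with 130 failures
afr = annualized_failure_rate(failures=130, drive_days=10_000 * 365)
print(f"{afr:.2f}%")  # -> 1.30%
```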

      • Backblaze doesn't even use SMR drives, for one. Those large-capacity drives in constant service seem to do better than consumer desktop drives. I see consumer drives failing in far greater numbers since SSDs became cheap enough; it means systems that still use HDDs are getting bottom-tier parts more than ever. The fact that there are more reliable drives doesn't matter as much in the real world.

  • Anyone else notice the very last line states that the bold headline is bullshit? That seems to be pretty standard on Slashdot, and a lot of media generally.

    Absolutely false headline, which is all most people ever see. Then at the bottom of the story they mention the reality so the author can claim they weren't full of shit.

  • I can relate to this (Score:5, Interesting)

    by RitchCraft ( 6454710 ) on Monday March 20, 2023 @11:54PM (#63386737)
    I have LOTS of IDE and SATA hard drives from all manufacturers ranging from 250MB to 1TB made between 1993 and 2012. Most of them still work just fine (except for anything Western Digital Green, they were always junk). I find that drives after 2012 barely make it to 5 years, and those in the past few years barely make it past their warranty period. The best I've encountered ... Western Digital Black 1TB SATA drives manufactured between 2008 and 2012. Those drives are beasts!
    • Re: (Score:2, Interesting)

      by thegarbz ( 1787294 )

      Cool anecdotes. On the flip side data from Backblaze shows the failure rates have not in any significant way changed over the past decade.

      Heck based on my own personal experience I have 100% reliability from every drive I purchased in the past 7 years, and woeful experience prior to that. So my sample size shows that drives are now perfect and will never fail.

    • Wow. You have 30-year-old drives that work? I didn't think hard drives could survive that long.
      • by Anonymous Coward
        I have a 41.1GB IBM Deskstar from "SEP-2002" in front of me right now. Has some version of OpenBSD on it and I fire it up once in a while. The sound it makes when searching is...comforting. Like the characters on the terminal. Now leave me alone.
      • I have a few MFM and RLL drives even older that still work, although that is an anomaly; MFM and RLL drives still running today is pure luck. I also have a few 120MB and 170MB Plus HardCards (you plug them into an ISA slot) still new in the shrink-wrapped boxes. I'm hesitant to break the seal and try them, though. Adrian from Adrian's Digital Basement showed how the foam inside them deteriorates over time and causes the heads to get stuck in position, with no way to correct this short of tearing the drives apart.
      • I have several drives that old that still work. I will say anything older than about 7-10 years really is a crapshoot, though. My policy for a long time has been to replace any drive older than 5 years that's doing anything important (it's not that I don't have backups, but even with backups I'd rather not deal with unexpected drive failures). I'm not always right on the dot with that, but still, that's given me a pretty good supply of older drives that were pulled from service while still perfectly functional.

  • by ctilsie242 ( 4841247 ) on Tuesday March 21, 2023 @12:06AM (#63386747)

    I think this is one of the reasons why RAID 5 has effectively been sunset in favor of RAID 6 at minimum, triple-parity RAID (RAID-Z3 in ZFS terms), or a modified RAID 10 where a three-way mirror is combined with striping, to allow at least two drives in the same vdev to fail before the array becomes at risk.

    Maybe even do away with RAID altogether and use erasure coding on the data end, similar to what Hadoop and MinIO do, where you can throw drives at it (even ones with two sets of heads), and all the RAID heavy lifting is done on the app side, as opposed to the block or filesystem layer.

    With drive failures rising, there is only so much RAID can do. Perhaps it is time for another low-tier storage format, now that areal density is giving diminishing returns? We have heard about holographic storage since the early 1990s and the days of Tamarack, but have seen zero actual products. Maybe it is time for storage vendors to stop resting on their laurels with magnetic storage and go back to the drawing board with other storage modes. Optical might be an answer, especially combined with holographic technology.

  • Not surprising (Score:5, Interesting)

    by wakeboarder ( 2695839 ) on Tuesday March 21, 2023 @12:24AM (#63386761)
    When you fill your drives with helium and have smaller tracks, it's only a matter of time.
    • Re:Not surprising (Score:5, Informative)

      by ctilsie242 ( 4841247 ) on Tuesday March 21, 2023 @12:33AM (#63386763)

      The tiny helium atoms are going to leak out anyway, so helium-filled drives definitely have an expiration date before they are rendered useless.

      • Re:Not surprising (Score:5, Interesting)

        by thegarbz ( 1787294 ) on Tuesday March 21, 2023 @04:44AM (#63386939)

        Helium atoms don't "leak"; they permeate, and the permeation rate isn't a function of time but rather of partial pressure. For an average undamaged HDD, helium permeation will not be a limiting factor in its lifetime. Take a hammer to it and crack it open, and that may be a different discussion.

        I remember this was all a big hoo-ha a few years ago, but it was not at all borne out in the data. I think it was Backblaze at the time that showed, after a run time of several years across 17,000-odd drives, only a single drive reporting that its helium level wasn't still at 100 percent, and across their failure rates the helium drives were "better" than other drives, within the margin of error.

        I.e. don't concern yourself with helium. Your drive will fail for other reasons.

        • What I'm saying is that drives are more complex than the simple hard drives of yesteryear.
          • Indeed. But that ties into another post of mine where I said every drive has been more complex and required more careful design and tighter tolerances than the one that came prior. But we didn't hit peak HDD reliability in 1956.

            It sounds intuitive that complexity makes something less reliable, but reliability is more complex than that. You can look at the data from Backblaze to get a good insight; their benefit is that they've been publishing data for a while. Currently they are experiencing a ~1.5% annualised failure rate.

    • I guarantee you they aren't talking about helium filled drives. Not with the figures and drive sizes they are reporting. If you want to check out helium vs air filled drives then consider https://www.backblaze.com/blog... [backblaze.com]

  • by Anonymous Coward

    Talk about "survivor bias"... lolmorons

    Anyway, I can say for sure that SMR drives are shit. I have piles of dead ones. Not only that, but they fail in really bad ways: they keep "working," but when you read the data back out it's all corrupted. Total garbage technology.

  • Well, they want the hard drives to "expire" early so they can sell more. Way of life.
    • by Creepy ( 93888 )

      I don't think so. IBM drives were notorious for dying within 3 years; in fact, I had like 20 replaced under warranty (all backed up on tape), and then they were ultra reliable. Seagate had a similar ebb and flow. Western Digital was the most reliable I've had, but I had 2 fail after 10 years. I still don't trust Seagate due to people I know who have had major issues. Not blaming Seagate for being shit (I have like 3 friends who work for them), just people I know had issues. I have like 50 friends/acquaintances

      • It's not just a per unit thing but models too. I would never buy a 3.5 Seagate over WD for a desktop but I have eight 10k Seagates in a Sparc server from 2018 that are holding up just fine, and they were used when I bought them.
    • by v1 ( 525388 )

      I wonder about that... when a product you have dies and you're thinking about getting another, you're going to mull over whether to get the same brand again or try something new. With hard drives, there can be a very painful association with losing data on a drive when it fails, and that would seem to negatively bias a person's view of the brand. Maybe hard drives can't just rely on brand loyalty for a replacement, and need to provide noticeably good service to offset the bad taste a consumer gets when they lose data.

  • ...probably weren't taken to a data recovery service. Would you get accurate cancer rate figures by only surveying people who went in for cancer treatments? I feel it's also important to mention that when hard drives do fail, they tend to do so slowly. Pending sectors start to build up, etc. You get warning signs that allow you to back up your data in most cases. That is in stark contrast to SSDs, which will be working great one day and then the next day are simply dead, along with your data.
  • isn't the average life span of hard disks.
  • "Most" of those drives were 40GB to 10TB

    I honestly spat my coffee on my screen just now. That statement is like saying "most cars have wheels".

  • Thank you for reminding me that it's time to rotate my home backup drive. Looks like the drive I want is at a fairly reasonable price at my local electronics retailer. I'll put it on my weekend to-do list.
  • To me, what's missing is the population of drives still in operation from before 2015. Sure, you could argue that cramming 2 to 4 times the storage capacity into the same form factor may lead to reliability issues down the road, but are we talking about consumer or data-center-grade drives as well? Of the drives that showed failures, how many more are still working, or have they been retired?

    This sounds more like a whitepaper about "new drives are bad, use us if you don't have a backup strategy or don't know what RAID or JBOD is."

    • To me what's missing is the population of drives still in operation from before 2015. Sure you could argue that cramming 2 to 4 times the storage capacity in the same form factor may lead to reliability issues down the road but are we talking about consumer or data center grade drives as well? Of the drives that showed failures how many more are still working or have they been retired?

      This sounds more like a whitepaper about "new drives are bad, use us if you don't have a backup strategy or don't know what RAID or JBOD is."

      Shh!!! You're not supposed to break the fun of analytic arguments by pointing out that the source of info is probably a suggestive sales vector. Where's the fun in arguing that? ;)

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...