Data Storage

Backblaze Hard Drive Stats for 2017 (backblaze.com) 93

BackBlaze is back with its hard drive reliability report. From the blog post: Beginning in April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. As of the end of 2017, there are about 88 million entries totaling 23 GB of data. At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We'll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track.
This discussion has been archived. No new comments can be posted.
  • Bottom line (Score:5, Insightful)

    by ArchieBunker ( 132337 ) on Thursday February 01, 2018 @02:19PM (#56048705)

    Seagate is garbage and cheap while HGST is better and more expensive. WD falls in the middle. Price per GB has not fallen in a long time either. I'm out of space and always wonder about saving $90 by shucking a WD EasyStore or paying for HGST.

    • Re:Bottom line (Score:5, Interesting)

      by slaker ( 53818 ) on Thursday February 01, 2018 @02:37PM (#56048841)

      It looks to me like everyone has cleaned up their act. I'm willing to accept a 1% - 2% annualized failure rate for (mostly) consumer drives. It wasn't all that long ago that I thought anything under 5% was doing pretty well. I'm interested to see how the trends for the 8TB+ units play out, but it doesn't look like there are any obvious crap products any longer.

    • by lgw ( 121541 )

      Yeah, Seagate certainly did not cover themselves with glory here. I'm disappointed by how far WD has fallen, but the writing was on the wall with the last couple years' reports. Glad I read those and moved to HGST.

      • by Anonymous Coward

        Yeah, Seagate certainly did not cover themselves with glory here. I'm disappointed by how far WD has fallen, but the writing was on the wall with the last couple years' reports. Glad I read those and moved to HGST.

        I'm not a Seagate fanboi or anything but did you even look past that first chart or are you just purely talking out of your ass? With the exception of the two obvious outliers that don't have enough drives to be statistically relevant, Seagate averaged a 1% higher failure rate than WD. That's basically meaningless, even to data centers. All the manufacturers are basically within 2% of each other, which is about the best you're going to find for any consumer product.

        I have 15 HGST 4TB drives and have had 2 fail

    • I'm curious how Seagate screwed things up so badly. They bought Samsung's HDD division some years ago and I found that Samsung produced some incredibly reliable drives (I've actually still got a few running in older machines that have been going for over a decade at this point) for that time period.

      I also remember a time when Seagate was thought of as one of the more reliable brands, at least compared to some other ones (Maxtor) that had burned a lot of people I knew. I think Seagate also bought them at
      • by tlhIngan ( 30335 )

        I'm curious how Seagate screwed things up so badly. They bought Samsung's HDD division some years ago and I found that Samsung produced some incredibly reliable drives (I've actually still got a few running in older machines that have been going for over a decade at this point) for that time period.

        I also remember a time when Seagate was thought of as one of the more reliable brands, at least compared to some other ones (Maxtor) that had burned a lot of people I knew. I think Seagate also bought them at any

        • I think you might be overstating a bit just how cheap Seagate drives are. I recently bought a HGST NAS drive and it was like 5% more expensive than a similarly-specced Seagate. This seems to roughly hold true when looking at comparable lines, though obviously the shingled archive drives would be much cheaper than the high performance NAS stuff.

        • I admit I buy HGST on sale, but where the fuck do you live where HGST is even 50% more expensive? Are you comparing HGST NAS models to Seagate desktop models? That's the only way to explain almost double pricing. Even then, I think a 4TB desktop went for $99 and I bought several 4TB HGST NAS for $124.99 each over Black Friday. Just buy your spares on sale. Then you can hold out for 3-5 year warranty NAS drives.
      • by Gr8Apes ( 679165 )

        I'm curious how Seagate screwed things up so badly. They bought Samsung's HDD division some years ago and I found that Samsung produced some incredibly reliable drives

        Probably because they shut down Samsung? Buying a rival and all.

    • Re:Bottom line (Score:5, Informative)

      by omfglearntoplay ( 1163771 ) on Thursday February 01, 2018 @03:02PM (#56049013)

      HGST is a subsidiary of Western Digital.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      So like... what is going on here with WD if their HGST line is so much better than their regular line? Also, I hope people are being careful of the crazy rates for Q4, because they don't mean what they appear to mean at a surface level. Quoting the article:

      "Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure."

      • Re:Bottom line (Score:4, Interesting)

        by slaker ( 53818 ) on Thursday February 01, 2018 @03:07PM (#56049059)

        A WD employee I know told me that the manufacturing and development processes for WD and HGST have retained their distinct identities, at least as of 2016. Maybe it's too expensive for WD to switch to the HGST ways of doing things?

        • by AmiMoJo ( 196126 )

          This is indeed true. HGST R&D is still done in Japan, and they target markets that will pay slightly more for increased reliability and durability (servers, appliances like DVRs, etc.)

          WD is more consumer focused.

      • "Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure."

        Yes, the naive will assume that the stated failure rates are gospel. However, the real truth is that the Backblaze reported numbers are sampled estimates that are a combination of the intrinsic reliability of the drive and the operating environment and workloads. It is not clear how well their failure rate estimates translate to other environments. Cooling systems, vibration mitigation, duty cycles, etc. are significant.

        One thing that Backblaze could do to impart some robustness to their numbers is to pr

        • One thing that Backblaze could do to impart some robustness to their numbers is to provide statistical confidence intervals along with the single estimators.

          Something like this chart, you mean:
          https://www.backblaze.com/blog... [backblaze.com]

          • One thing that Backblaze could do to impart some robustness to their numbers is to provide statistical confidence intervals along with the single estimators.

            Something like this chart, you mean:
            https://www.backblaze.com/blog... [backblaze.com]

            Yes, exactly, but actually attached to all tables/charts and not just a few. It's not an accident that the small sample size tables don't have confidence intervals. Those are the tables that need them the most to indicate that the estimated values should be taken with a huge block of salt.
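An exact Poisson interval of the kind being asked for can be computed with nothing but the standard library. As a sketch: the ST4000DM005 figures (1 failure in 1,255 drive days) are from the article; the function names and the bisection approach are illustrative, not Backblaze's actual method.

```python
import math

def poisson_ci(failures, conf=0.95):
    """Exact two-sided confidence interval for a Poisson count,
    found by bisection on the Poisson CDF (fine for small counts)."""
    alpha = (1 - conf) / 2

    def cdf(lam, k):  # P(X <= k) for Poisson(lam)
        return sum(math.exp(-lam) * lam**i / math.factorial(i)
                   for i in range(k + 1))

    def solve(f, lo, hi):
        # assumes f(lo) > 0 > f(hi) and f monotone decreasing
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: lam where P(X >= failures) equals alpha
    lam_lo = 0.0 if failures == 0 else solve(
        lambda lam: alpha - (1 - cdf(lam, failures - 1)), 0.0, 100.0)
    # upper bound: lam where P(X <= failures) equals alpha
    lam_hi = solve(lambda lam: cdf(lam, failures) - alpha, 0.0, 100.0)
    return lam_lo, lam_hi

def afr_with_ci(failures, drive_days, conf=0.95):
    """Annualized failure rate (%) with an exact Poisson interval."""
    lo, hi = poisson_ci(failures, conf)
    scale = 365.0 / drive_days * 100.0  # counts -> annualized percent
    return failures * scale, lo * scale, hi * scale
```

For 1 failure in 1,255 drive days this gives a point estimate of about 29% with an interval stretching from under 1% to over 160%, which is exactly the "huge block of salt" being asked for.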

        • There's a big benefit to Backblaze for publishing these yearly stats: this ensures their drive purchases will be the cream of the crop, reducing their replacement costs.

          I believe this outweighs those (valid) concerns you listed.

          • There's a big benefit to Backblaze for publishing these yearly stats: this ensures their drive purchases will be the cream of the crop, reducing their replacement costs.

            I believe this outweighs those (valid) concerns you listed.

            I totally agree about the value of the availability of this data. I applaud Backblaze for releasing the numbers and actually attaching manufacturer names and models to the numbers.

            • I meant a far more cynical explanation than how you understood my comment. Cynical for drive manufacturers, that is.

      • All the HGST disks that Backblaze has installed in their servers are old ones, purchased before WD even bought Hitachi's drive division. None of the HGST models they mention are currently even purchasable.

      • So like... what is going on here with WD if their HGST line is so much better than their regular line?

        HGST has been kept at arms length since their purchase by WD. They may share a corporate financial statement but effectively they remain two separate companies in most of the ways that matter.

    • Only two of the Seagate drive models (ST4000DM001, ST4000DM005) had excessively high failure rates, and the worst one (ST4000DM005) had been in use the shortest time of all drive models in the report by far and suffered a single failure. The confidence interval chart shows this - the low end of the confidence interval of that model is 0.0% - meaning for all we know it could be the most reliable drive in the report; it just had the misfortune of a random failure soon after they began using it.

      Subtract those
      • If you look further back in history (https://www.backblaze.com/blog/hard-drive-reliability-update-september-2014/) you will see that Seagate had miserable results with early 'large' drives (1.5, 2 TB.) I found this unfortunate as I had really good luck with a bunch of 200GB Seagate barracudas I bought for my first RAID setup. I suppose the troublesome drives are based on the same or similar designs and share a flaw that causes high failure rates. I purchased two 2TB Barracudas and both eventually failed. An

      • by AmiMoJo ( 196126 )

        The most important lesson is: back your data up or suffer data loss every 5-10 years.

        I'm sure Backblaze would like you to use their service, but I prefer SpiderOak.

    • I have been buying exclusively HGST hard drives for a while now, because they're not actually that much more expensive, and as BackBlaze's numbers show, they're by far the most reliable.

      For SSDs, I guess Samsung is still king, if we ignore the debacle with the 840 Evo and stick to the Pro versions.

  • by Anonymous Coward

    For like the 5th year in a row HGST has the lowest failure rates.

    For personal use it's clear HGST is the way to go. Sure you pay a few extra bucks, but I don't have a massive data farm with redundant data spread out all over the place. I can't afford multiple drive failures.

  • by dkone ( 457398 ) on Thursday February 01, 2018 @02:41PM (#56048883)

    I have used nothing but HGST drives for all the machines I have built, including NAS's, for as long as I can remember. This is an awesome study and I am sure it probably has some peeps at Seagate steaming right about now.

    • by umafuckit ( 2980809 ) on Thursday February 01, 2018 @03:00PM (#56048993)
      The only time I tried HGST was when I bought half a dozen of them for a trial. One failed out of the box and another lasted a week. Clearly I got unlucky, but it didn't encourage me to repeat the experience. I generally buy WD because the failure rate is acceptable and there is a good return policy which is easy to use.
      • Yes, experience with devices that have generally low failure rates is often down to luck. Two in one box? Did you check the box for damage in the corners? I'll bet you a dollar the box was dropped or worse, handled by UPS.

        I have had the same problem with WD. I have 8 identical model 4TB HDDs in my system. 2 of them failed out of the box with nothing but a click on bootup, and after they were replaced the entire set hadn't missed a beat.

        I'm very willing to accept infant mortality providing the random failur

        • No obvious damage, which was odd. Otherwise I'd have returned the whole box and started again.
          • I actually wonder if anyone puts their HDDs through a S.M.A.R.T. test when they first get them. The "Conveyance" self-test is specifically designed to detect damage incurred during shipping, though I'm not quite sure exactly what it checks or how useful it is.

            • I didn't know about that. Interesting. I'm about to move a bunch of servers which together comprise over a PB of storage. I'll look into running that test on the other end.
    • Yes, HGST drives are the most reliable, but they're also very expensive.

      It's ironic that when RAID was invented it stood for Redundant Array of Inexpensive Disks. This was later revised to stand for Redundant Array of Independent Disks.

      • by Strider- ( 39683 )

        Oddly, when I was shopping for 8TB drives for my NAS recently, the HGST 7200rpm helium drives were cheaper than the equivalent WD Reds or Seagates. I think I paid $250 CAD each; the WDs and Seagates were about $15 more. So yeah, HGST isn't always the most expensive, especially if you shop.

        If you're shucking drives, that's another matter.

        • It's because you shopped for NAS-specific drives. Seagate often makes much cheaper non-NAS drives.
          Still, $250 is good, it's now about $300.

          • by Strider- ( 39683 )

            The only cheaper ones I found were the archive drives, which use SMR, which is fine for backups and read-mostly workloads, but not really suitable for random access. Their normal drives were about the same as the NAS drives.

            • maybe this has changed, but it clearly wasn't the case when they first launched "NAS" and "RAID" drives.

    • I can attest to just how bad Seagate's DM-series drives actually are (I had a dozen or so 3TB DM-series myself, and they typically died in less than a year). That said, the DM-series isn't meant to be used in that way. They aren't designed to be put in a box connector facing down, with little heat dissipation and with high vibration. It does appear the 8TB DM model actually holds up really well, even in this environment.

      The NM-series (Enterprise class) did fare above average. I just wish that backbla

    • by dj245 ( 732906 )

      I have used nothing but HGST drives for all the machines I have built, including NAS's, for as long as I can remember. This is an awesome study and I am sure it probably has some peeps at seagate steaming right about now.

      Why? Backblaze is still buying palletfuls of Seagate drives based on their drive counts of the 12TB and 10TB drives. I believe it was explained last year that the amortized $/operating year was lower than other brands, even with the increased failure rate.

      Making that kind of decision depends on how tolerant of failure the purchaser is, the cost of replacement, and how many drives they are purchasing. Some large storage companies don't even bother replacing failed drives, they just disable them.

  • by BrookHarty ( 9119 ) on Thursday February 01, 2018 @03:01PM (#56049005) Journal

    Wondering if any other storage company releases their HD failure rates?

  • by BenJeremy ( 181303 ) on Thursday February 01, 2018 @03:25PM (#56049195)

    I have them. 20+ dead Seagates... internals and externals. Only 2 drives in the past 10 years have survived... yet I have no dead Hitachis, one dead Samsung and a couple dead WDs.

    Seagate and Maxtor merging combined the worst of both companies into one terrible behemoth.

    Also, drive prices still suck. The floods in Thailand were an excuse to gouge customers as insurance companies funded the construction of shiny new plants capable of producing 10+TB drives as fast and as cheaply as they had been churning out 2TB drives (for around $45 - 7 years ago!). We should be getting 10TB drives for $50 by now.

    • I have them. 20+ dead Seagates... internals and externals. Only 2 drives in the past 10 years have survived... yet I have no dead Hitachis, one dead Samsung and a couple dead WDs.

      The problem with anecdotes is you need a lot of them to separate statistics from sheer dumb luck. While Seagates are generally shit, I have 3 of them (2 in RAID 1 and one standalone). The standalone one is in the computer from which I'm typing and has 9 years of power-on time on it. The 2 in the RAID array are at 7 and 8 years, respectively.

      Mind you I also seem to own the last remaining OCZ Vertex 3 that hasn't died, and it's been plodding along for a good 7 years. Maybe storage loves me, or maybe the HDD is a

      • I have them. 20+ dead Seagates... internals and externals. Only 2 drives in the past 10 years have survived... yet I have no dead Hitachis, one dead Samsung and a couple dead WDs.

        The problem with anecdotes is you need a lot of them to separate statistics from sheer dumb luck. While Seagates are generally shit

        This is true. In this case though, I have seen a lot of anecdotal evidence of Seagate drives being crap.

        I also have my own anecdotes on the subject. The Seagate drives I have had all had abysmally short lives. I never got a big stack of dead Seagates though, as I stopped buying them. There were some duds at work too, and I told them to stop buying them as well.

      • I end up working on a few friend's/family member's computers, so I've seen my fair share of dead and dying Seagate drives.

        However, my anecdote is I bought no less than four of the infamous Seagate 1.5 TB drives back when they first came out. I still have all four, and despite being nearly a decade old, they all still work and have never given me a lick of trouble. I don't use them for anything important anymore, but I'm still using them.

        • Oh I agree, and frankly the wider data reflects that. I'm just saying to take care when using anecdotes. By my own anecdote I would say Seagate is more reliable than WD, but people's anecdotes (including mine) have such low sample sizes that you may as well flip coins.

          Hell in my household we have 100% success rate with OCZ SSDs. Frankly I have no idea why they went out of business :-)

  • Seagate SG4000 series life expectancy: 32 years
    Average HD life expectancy: 50 years
    HGST HDS5C series: 167 years
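Those figures appear to be just the reciprocal of each model's annualized failure rate. A quick sketch (the AFR inputs below are illustrative round numbers, not exact values from the report):

```python
def implied_life_years(afr_percent):
    """Mean years to failure implied by a constant annualized failure rate.

    Assumes the failure rate stays flat forever, which real drives
    (bathtub curve) certainly don't -- treat these as rough figures.
    """
    return 100.0 / afr_percent

print(implied_life_years(3.1))  # ~32 years, a Seagate-like AFR
print(implied_life_years(0.6))  # ~167 years, an HGST-like AFR
```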

    • Seagate SG4000 series life expectancy: 32 years Average HD life expectancy: 50 years HGST HDS5C series: 167 years

      I have always felt like there's a huge disconnect between the advertised life expectancy of products and reality. It seems wrong that they can say that a product will last a long time, yet the short warranty suggests that they are aware that their claims are full of crap.

      I hate to say "there oughta be a law", but as a consumer protection, I think that there should be a requirement to have a statement about the expected lifespan of the product, along with a statement about the warranty. This should apply to

  • Can someone explain to me how the Seagate ST4000DM005, of which they had sixty running and a single failure in a quarter, equates to a massive 29.08% annualized failure rate?

    They make an attempt to explain that case at the bottom of the page but it makes no sense to me. With a single failure causing such massive spikes I'd be leaving them off as "insufficient data" or at least introducing some error bars.

    • by edwdig ( 47888 )

      They explain that the report includes any drive they have at least 45 of in use. They don't say why that's the cutoff, but they do point out that the stats aren't meaningful with the low numbers of drives.

      Those 60 drives were running for an average of 21 days each. One drive failed in that time. (1 failure / 60 drives) * (365 days/year / 21 days) = 29% drives fail yearly

      It's not enough data to conclude anything. They just started deploying a new model and one drive died immediately, which looks awful if you
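The arithmetic above can be sketched as a quick script. The 60-drive / 21-day figures come from the comment, not from Backblaze's raw data:

```python
def annualized_failure_rate(failures, drive_days):
    """Backblaze-style AFR: failures per drive-year, as a percentage."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

# 60 drives running ~21 days each is ~1,260 drive-days, with 1 failure
print(annualized_failure_rate(1, 60 * 21))  # ~29%
```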

  • I see that Seagate continues being a piece of shit drive, and quite unfortunately the only one at reasonable price that I can find in the local market.
    Oh well...

  • by HuguesT ( 84078 ) on Thursday February 01, 2018 @05:35PM (#56050429)

    Backblaze is a backup service company. Basically, all they do with their drives is put them in a bespoke cabinet, slowly fill them up with data at internet speed, then leave them running for a long time doing hardly anything at all. Infrequently, when someone loses some data somewhere, they read a small portion of them. This is very far from what most people do with their drives. In particular, read/write performance and reliability do not matter to Backblaze.

    • This is very far from what most people do with their drives.

      Define most. It's becoming incredibly bloody common for someone to have a small NAS unit somewhere in their house. Spinning rust is basically relegated to these kinds of services now where SSDs form the core part of your day to day computing.

    • by sl3xd ( 111641 )

      Infrequently, when someone loses some data somehere, they read a small portion of them

      Checksumming filesystems (WAFL, ZFS, etc.) are the standard for large arrays, and it's pretty foolish to not run a scrub operation regularly -- at least weekly, possibly more often than that.

      If they're doing their job, reads will outnumber writes by several orders of magnitude.

  • This is neat and all, but I didn't see a mention of how the data is normalized in time. Just blindly comparing how many of a particular drive failed any given quarter is misleading at best. Their failures must be normalized in time. i.e. the failure rate must be scaled by the amount of time in service.

    This is usually specified as FIT (failures in time). It makes no sense to directly compare a batch of drives that might have only been in service 1 year with ones that might have been in service for 1 day.
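Converting Backblaze's drive-day counts to the FIT convention (failures per billion device-hours) is a one-liner; the 1-failure / 1,255-drive-day example is the one from the article, and the function name is just illustrative:

```python
def fit_rate(failures, device_hours):
    """FIT: expected failures per 10^9 device-hours of operation."""
    return failures / device_hours * 1e9

# 1 failure over 1,255 drive-days of service
print(fit_rate(1, 1255 * 24))  # ~33,200 FIT
```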
  • ...that I most look forward to on Slashdot. I wish they'd also publish more stuff on SSD torture testing / failure rates.

    • As far as I can tell, you can expect at least 500TB of writes to any decent SSD you buy today, which is way beyond what the drive lifetime counter in SMART tells you. Some of them soldier on for much longer than that.

      I've been using a Samsung 840 Evo (yes, the shitty ones that needed firmware patches to not eat themselves over time) as my system drive since 2013, and it just keeps on trucking. I have a swap partition on it, I haven't bothered moving the /var partition to a HDD to save on writes, I actually
