Data Storage

Who Makes the Best Hard Disk Drives? 444

Hamsterdan writes "Backblaze, the cloud backup company that open-sourced its Storage Pod a few years ago, is now providing information on drive failure rates. They currently have over 27,000 consumer-grade drives spinning in Backblaze storage pods. There are over 12,000 drives each from Seagate and Hitachi, and close to 3,000 from Western Digital (plus a too-small-for-statistical-reporting smattering of Toshiba and Samsung drives). One cool thing: Backblaze buys drives the way you and I do: they get the cheapest consumer-grade drives that will work. Their workload is almost one hundred percent write. Because they spread the incoming writes over several drives, their workload isn't overly performance-intensive, either. Their results: Hitachi has the lowest overall failure rate (3.1% over three years). Western Digital has a slightly higher rate (5.2%), but the drives that fail tend to do so very early. Seagate drives fail much more often — 26.5% are dead by the three-year mark."
  • by t0qer ( 230538 ) on Tuesday January 21, 2014 @07:00PM (#46030353) Homepage Journal

    I remember when WD caviar drives were the most replaced component on systems I serviced. Seagate was the top contender with their SCSI 10krpm drives.

    • by ZenMatrix ( 1299517 ) on Tuesday January 21, 2014 @07:04PM (#46030407)
      Seagate drives are terrible drives now. I've had three of their external drives not last more than a year.
      • by QRDeNameland ( 873957 ) on Tuesday January 21, 2014 @07:10PM (#46030443)

        Seagate drives are terrible drives now. I've had three of their external drives not last more than a year.

        Agree, I bought three 2TB Seagates for my home server a few years back... two of them failed within a year. Yet another brand name I used to trust, now shot to shit.

        • by ackthpt ( 218170 ) on Tuesday January 21, 2014 @07:30PM (#46030627) Homepage Journal

          Seagate drives are terrible drives now. I've had three of their external drives not last more than a year.

          Agree, I bought three 2TB Seagates for my home server a few years back... two of them failed within a year. Yet another brand name I used to trust, now shot to shit.

          This is why you just buy whatever is cheap and rig up a RAID 5. A drive craps out and you throw another one in and keep on going.

          • by QRDeNameland ( 873957 ) on Tuesday January 21, 2014 @07:51PM (#46030851)

            This is why you just buy whatever is cheap and rig up a RAID 5. A drive craps out and you throw another one in and keep on going.

            That's exactly what I did...note I did not claim to have lost any data when the drives failed. The point is that when you have a 66% failure rate on brand new drives within a year, you start reconsidering your choice of vendor, no?

          • by brianwski ( 2401184 ) on Tuesday January 21, 2014 @08:02PM (#46030985) Homepage
            Personally, I'd really recommend RAID6 with at least 2 parity drives. But always remember, RAID is *NOT* backup. RAID doesn't protect against user stupidity like backup does. RAID does not protect against theft. You don't have to use Backblaze for backups, but for goodness sake USE SOMETHING.
          • by Lawrence_Bird ( 67278 ) on Wednesday January 22, 2014 @12:21AM (#46032683) Homepage

            DO NOT buy 5 identical drives at the same time from the same place and same manufacturer or face increased risk of more than one dying at (or near) the same time.

            Also keep in mind that raid will not protect you from data corruption (or more correctly, it will assure that you retain corrupt data). The happiest event is when a drive flat out dies.

          • by WuphonsReach ( 684551 ) on Wednesday January 22, 2014 @01:03AM (#46032837)
            This is why you just buy whatever is cheap and rig up a RAID 5. A drive craps out and you throw another one in and keep on going.

            RAID-5 on a system where you don't have high-quality drives, a high-quality power supply, battery backup (for the RAID card), and a good UPS (preferably multiple UPS units feeding redundant PSUs inside the case) is simply a bad idea. Sooner or later, you *will* lose the array to a double-drive failure. Oh, and make sure you have plans to swap out drives on a regular basis and a working backup plan.

            RAID-6 is better, but not by much. It can at least deal with a double drive failure. But performance still goes in the gutter while it's degraded and/or rebuilding.

            One of the more fault-tolerant setups is a 3-way RAID-1 mirror where you can lose 2 of 3 drives without losing data. The downside is that it is only 33% efficient while RAID-6 (1 spare, 2 parity, 5 data) is 62% efficient. A well configured RAID-10 setup also works well, but never gets much above 40-45% space efficiency if you set aside a hot spare for it.

            Main reason why I prefer RAID-10 for larger arrays is that the time to rebuild a failed disk is linear to the size of a single disk within the array (because you have mirror pairs). With RAID-5 / RAID-6 the rebuild time scales with the total size of the array. For a 15-20 drive array, that means RAID-10 could rebuild the failed drive in 1/5 to 1/10 the time of the RAID-5 or RAID-6 array with the same number of spindles.
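
A quick sketch of the space-efficiency and rebuild arithmetic in the comment above (a minimal illustration in Python; the drive counts are just the examples given there, and real arrays have more variables than this):

```python
# Usable-capacity fractions for the layouts discussed above.
# Drive counts are just the examples from the comment; real arrays vary.

def usable_fraction(data_drives, redundancy_drives, hot_spares):
    """Fraction of raw capacity left over for actual data."""
    total = data_drives + redundancy_drives + hot_spares
    return data_drives / total

layouts = {
    "RAID-6, 8 drives (5 data + 2 parity + 1 spare)": usable_fraction(5, 2, 1),
    "3-way RAID-1 mirror (1 copy usable of 3)":       usable_fraction(1, 2, 0),
    "RAID-10, 9 drives (4 mirrored pairs + 1 spare)": usable_fraction(4, 4, 1),
}

for name, frac in layouts.items():
    print(f"{name}: {frac:.0%} usable")   # ~62%, ~33%, ~44%

# Rebuild note: a RAID-10 rebuild only copies one drive's worth of data from
# the surviving half of the mirror pair, while RAID-5/6 must read the whole
# array to reconstruct the failed member, which is why RAID-5/6 rebuild time
# scales with array size rather than with a single drive.
```
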
          • by horza ( 87255 ) on Wednesday January 22, 2014 @04:38AM (#46033673) Homepage

            I have RAID 5 with 4 x 3TB seagate drives. 1 drive failed after a year and the 2nd in the same NAS failed a couple of days later before a replacement could come in the post. So far 3 / 8 Barracuda drives failed in just over 1 year. After just losing 4TB of data, including my entire photo collection, I've sadly realised RAID 5 isn't enough.

            Phillip.

        • by The Grim Reefer ( 1162755 ) on Tuesday January 21, 2014 @11:52PM (#46032547)

          Seagate drives are terrible drives now. I've had three of their external drives not last more than a year.

          Agree, I bought three 2TB Seagates for my home server a few years back... two of them failed within a year. Yet another brand name I used to trust, now shot to shit.

          Why? Because you bought three drives from the same batch. Perhaps they had an issue with that version, or the firmware. Hard drive quality is extremely cyclical. I remember when Micropolis, Seagate, Western Digital, Quantum, IBM, Conner, and a couple of others used to trade spots for being the best and worst drive manufacturers on an almost yearly basis. I've gone through a hell of a lot of drives over the years, and don't really have a favorite brand. I've had many WD and Seagate drives fail over the years, though proportionally more WD drives. Currently I have a bunch of Seagate and WD spinning drives, along with a couple of Samsungs. I am pretty superstitious regarding anything called Deskstar, though. I've had at least half a dozen different models of Deskstars from both IBM and Hitachi fail with little warning.

      • by bennomatic ( 691188 ) on Tuesday January 21, 2014 @07:44PM (#46030761) Homepage
        I'm just kind of amazed that Seagate is still around. I remember some years back, there was a huge fraud scandal where unsold inventory was being booked as sold in order to keep the stock price up. They were storing the drives in 18-wheelers and, at night, backing the trucks up against each other so that if an investigator wanted to break in, they had to physically move the trucks, giving the company time to respond. It was crazy.
        • by kriston ( 7886 ) on Tuesday January 21, 2014 @09:23PM (#46031717) Homepage Journal

          Was this a Seagate scandal or actually a MiniScribe scandal (acquired by Maxtor, acquired by Seagate)?

          • by elbonia ( 2452474 ) on Tuesday January 21, 2014 @11:55PM (#46032571)

            It was MiniScribe, from the court documents: [leagle.com]

            In mid-December 1987, Miniscribe's management, with Wiles' approval and Schleibaum's assistance, engaged in an extensive cover-up which included recording the shipment of bricks as in-transit inventory. To implement the plan, Miniscribe employees first rented an empty warehouse in Boulder, Colorado, and procured ten, forty-eight foot exclusive-use trailers. They then purchased 26,000 bricks from the Colorado Brick Company.

            On Saturday, December 18, 1987, Schleibaum, Taranta, Huff, Lorea and others gathered at the warehouse. Wiles did not attend. From early morning to late afternoon, those present loaded the bricks onto pallets, shrink wrapped the pallets, and boxed them. The weight of each brick pallet approximated the weight of a pallet of disk drives. The brick pallets then were loaded onto the trailers and taken to a farm in Larimer County, Colorado.

            Miniscribe's books, however, showed the bricks as in-transit inventory worth approximately $4,000,000. Employees at two of Miniscribe's buyers, CompuAdd and CalAbco, had agreed to refuse fictitious inventory shipments from Miniscribe totalling $4,000,000. Miniscribe then reversed the purported sales and added the fictitious inventory shipments into the company's inventory records.

    • by gigne ( 990887 ) on Tuesday January 21, 2014 @07:28PM (#46030605) Homepage Journal

      I only have a couple of home servers with a total of 24 disks, 50% WD and the rest Seagate. I've never had to send a WD back. Those Seagate drives fail all the damn time: I have replaced 25% of them in 1.5 years. Sometimes the brand new replacement (as in a new retail drive) fails very quickly, within 1-4 months.
      I also refuse to use any of their RMA replacement drives, as they seem to go bad within 6 months. Not a single RMA'd drive has lasted more than a year.
      At this point I am actively migrating data off those RAID arrays onto the new WD drives. I have no faith in Seagate.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      I'm not accusing you here, but I just wanted to point out that everyone has anecdotal evidence suggesting that ${company-A}'s drives suck and that ${company-B}'s drives are awesome. The only problem is that everyone has a different opinion on who companyA and companyB are!

      This is a perfect case study on consumerism and confirmation bias. People swear by a product until it fails, and then they hate it and love the replacement until it fails...

      • by scubamage ( 727538 ) on Tuesday January 21, 2014 @07:42PM (#46030733)
        It's also a study in the law of large numbers. If 10 million people all say Seagate's drives suck, there is a very good possibility that Seagate's drives do in fact suck.
        • Backblaze is actually quite happy with the Seagate drives: the performance is consistent, and the price is low. The 4TB drives are good, and they want to buy Seagate 15TB drives. They only had trouble with the 1 and 2TB series.

          Hypothetical example: if 50% of the drives fail, but the drives are half as expensive as drives with a 10% failure rate, it's better to choose the former. For the same money, you end up with about 11% more usable disk space. Swapping takes some effort, but they rely on RAID anyway.
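
A back-of-the-envelope check of that hypothetical (the prices and failure rates are the comment's made-up figures, not real quotes, and swap labor is deliberately ignored):

```python
# Same budget, two hypothetical drives: one at half price with a 50% failure
# rate, one at full price with a 10% failure rate. Figures are the comment's
# made-up numbers, not real prices; replacement labor is ignored on purpose.

budget = 1000.0          # arbitrary
capacity_tb = 4.0        # per drive

def surviving_tb(price, failure_rate):
    drives_bought = budget / price
    return drives_bought * capacity_tb * (1 - failure_rate)

cheap = surviving_tb(price=100.0, failure_rate=0.50)
solid = surviving_tb(price=200.0, failure_rate=0.10)

print(f"cheap drives: {cheap:.0f} TB still spinning")   # 20 TB
print(f"solid drives: {solid:.0f} TB still spinning")   # 18 TB
print(f"cheap advantage: {cheap / solid - 1:.0%}")      # ~11%
```
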

          • by BLKMGK ( 34057 ) <{morejunk4me} {at} {hotmail.com}> on Wednesday January 22, 2014 @01:28AM (#46032959) Homepage Journal

            You might want to read their report as they point out that swapping drives is COSTLY. Their experience is that it's better to pay a little more and not have to screw with the drive. Their report also documents which drives "pop out" of their RAID arrays and require costly attention. When a RAID array goes "bad" it can take time to recover, that's a cost that is almost certainly going to be more than what that troublesome drive saved them in the short run. These drives don't cost much more than $100 apiece and I'm betting their employees aren't being paid minimum wage so that hypothetical $50 savings isn't much especially if data is lost....

    • Weird; for the longest time WD was my go-to brand for hard drives, especially since early on they had an incredible no-questions-asked replacement policy if you got in touch with their support. I had two Seagate drives in those days, and both failed within a few months of purchase. To this day I rarely use them. It wasn't until recently that I started picking up Hitachi (conventional) and Samsung (SSD) drives, and I could not be happier with them. I really only buy WD/Seagate for external data warehousing.
    • I remember when WD caviar drives were the most replaced component on systems I serviced

      Yeah, it seems that WD was the first of the HDD makers to crank up their power draw, and the excess heat in cases expecting older, cooler drives caused rampant failures. I don't think it's a coincidence that they've learned a lesson from that, and gone the other way... WD was the first offering "green" drives, and they do run cold and quiet.

      Personally, with the failure rates being reasonably close, there's one minor thi

  • by Anonymous Coward on Tuesday January 21, 2014 @07:01PM (#46030371)

    I built a new gaming rig the weekend after Black Friday and had to comparison shop all the consumer hard drives on the market (read: offered by Newegg). From the reviews, Hitachi is a relative unknown, Seagates tend to last just until their three-year warranty is up, and Western Digital offers a five-year warranty (and a price premium to match). I ended up grabbing the WD Black. I was struck by how crap seek times are on 7200 RPM TB+ sized drives.

    • by Jamu ( 852752 ) on Tuesday January 21, 2014 @07:22PM (#46030547)

      I'm currently running a WD Green with an old Samsung SSD 830 as cache. I get the occasional pause if a game loads in something that isn't on the SSD. Overall though it's very fast with that combination and seek times, in particular, aren't an issue except for the first time you play a new game. A WD Black with an SSD as cache should be even better.

      My statistically insignificant experience with HDDs: WD: an old WD Caviar died, but the replacement lasted years. Two WD Greens still working. IBM (now Hitachi): had one die, two others still work. Samsung: both still working. Seagate: never had any.

  • by Dan Askme ( 2895283 ) on Tuesday January 21, 2014 @07:08PM (#46030429) Homepage

    After all this research, Backblaze still pick the highest failing drive.

    "What Drives Is Backblaze Buying Now?
    We are focusing on 4TB drives for new pods. For these, our current favorite is the Seagate Desktop HDD.15 (ST4000DM000)"

    So what was the point in this advert again?

    • by Derec01 ( 1668942 ) on Tuesday January 21, 2014 @07:25PM (#46030577)

      If they are fairly fault-tolerant, a reasonable Seagate discount would overcome that higher failure rate, even allowing for installation costs. They can spread that failure out; an individual cannot. That's why I appreciate that they released the statistics.

      • Right... if you can get 50 drives from Hitachi with a 5% failure rate or 100 drives from Seagate with a 25% failure rate for the same money, it's still cheaper to go with Seagate. If you're only buying one drive and have no backup, clearly steer away from them.

        • Or one seagate and a subscription to Backblaze!

          Note: I subscribe to Backblaze, having had two back-up drives fail for me in the last two years. Luckily, it was just the back-up drives...
      • by ranulf ( 182665 )
        A slightly more cynical person might think that by releasing these statistics, people might be less inclined to buy Seagate drives and thus the price they can negotiate becomes even lower when retailers are left with drives that don't shift as well. As the article says, "[they] buy drives the way you and I do: they get the cheapest consumer-grade drives that will work."
    • by AmiMoJo ( 196126 ) * on Tuesday January 21, 2014 @07:34PM (#46030659) Homepage Journal

      Maybe it's a canny strategy. Seagate drives are slightly cheaper because they are significantly less reliable, but they tend to fail within the warranty period, so Backblaze can return them for a refurb that has at least been fully tested and maybe lasts another year or two.

    • by QuasiSteve ( 2042606 ) on Tuesday January 21, 2014 @07:34PM (#46030663)

      After all this research, Backblaze still pick the highest failing drive

      They're looking for 4TB models. They only cite two models without any further information.

      Seagate ST4000DM000
      vs
      Hitachi HDS5C4040ALE630

      You can look up technical details, benchmarks, etc. but perhaps the decision is simply in the price.
      Seagate: $164.99
      Hitachi: $295.00

      For the Hitachi model to start making sense, price-wise, that Seagate model would have to fail a lot more often than their numbers are currently showing.

      ( And yes, I'd imagine they can squeeze better deals than regular consumer prices out of the companies - but then, they could do that for either brand, and probably through an intermediary anyway. )
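
As a rough break-even check on those two list prices, here is a crude three-year cost model (purchase price plus the chance of having to buy a replacement; it ignores labor, warranty coverage, and bulk discounts):

```python
# Crude three-year break-even for the two list prices quoted above.
# Model: cost ≈ purchase price + (3-year failure rate × replacement price).
# Ignores labor, warranty replacements, and bulk discounts.

seagate_price = 164.99    # ST4000DM000 price quoted above
hitachi_price = 295.00    # HDS5C4040ALE630 price quoted above
hitachi_fail_3yr = 0.031  # Hitachi's ~3.1% three-year rate from the article

hitachi_cost = hitachi_price * (1 + hitachi_fail_3yr)
breakeven_seagate_fail = hitachi_cost / seagate_price - 1

print(f"Seagate break-even 3-year failure rate: {breakeven_seagate_fail:.0%}")
# ~84% -- far beyond even the ~27% the article reports for Seagate overall.
```
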

    • by brianwski ( 2401184 ) on Tuesday January 21, 2014 @07:35PM (#46030669) Homepage

      After all this research, Backblaze still pick the highest failing drive.

      Disclaimer: I work at Backblaze. Every month we ask a list of about 20 suppliers for their best price on a variety of drives. There is a little spreadsheet we have that kicks out which drive to purchase based on those prices and drive failure rates. Even if Hitachi is the very highest reliability in our application, it only justifies a SMALL price premium because when one drive dies, we don't lose any customer data. It saves our datacenter IT team 15 minutes to *NOT* swap a drive, so that's worth 15 minutes of salary to us, but not more.
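
A toy version of that kind of spreadsheet logic, just to illustrate the trade-off. This is not Backblaze's actual formula: the technician's hourly rate is a made-up placeholder, and the failure rates are the article's brand-level three-year figures.

```python
# Toy drive-selection comparison: purchase price plus the expected cost of a
# failure (a replacement drive plus ~15 minutes of a technician's time).
# NOT Backblaze's actual spreadsheet. The hourly rate is a made-up placeholder
# and the failure rates are the article's brand-level three-year numbers.

TECH_HOURLY_RATE = 60.0   # placeholder assumption
SWAP_HOURS = 0.25         # "15 minutes to swap a drive"

def expected_3yr_cost(price, fail_rate_3yr):
    swap_labor = SWAP_HOURS * TECH_HOURLY_RATE
    return price + fail_rate_3yr * (price + swap_labor)

candidates = {
    "cheap drive  ($165, 26.5% fail over 3 yrs)": expected_3yr_cost(165.0, 0.265),
    "pricey drive ($295,  3.1% fail over 3 yrs)": expected_3yr_cost(295.0, 0.031),
}
for name, cost in candidates.items():
    print(f"{name}: ${cost:.2f} expected over 3 years")
# The cheap drive still wins here: a failure only costs another cheap drive
# plus a few dollars of labor, because redundancy means no customer data is
# at risk.
```
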

      • Commenting to undo moderation. Informative post.

      • by BLKMGK ( 34057 ) <{morejunk4me} {at} {hotmail.com}> on Wednesday January 22, 2014 @02:01AM (#46033141) Homepage Journal

        A question if I may: what are BackBlaze's experiences when it comes to so-called "bit rot"? You guys have enough drives in operation that this is a potential issue, and I'm curious about your experience and countermeasures, if any. With the rise of ZFS and BTRFS etc. this has been something that has caught my eye, but I'm not yet sure it's something I'm inclined to worry about, so I'm curious about unbiased experiences. I know there has been an article or two in the past about how BackBlaze works, but I don't recall these kinds of low-level details being in them. Can you share?

    • by _merlin ( 160982 )

      In a situation where you have that many disks and fully redundant storage, the lower purchase cost may win out over better reliability in terms of total cost to the business. It's a very different equation if you aren't working within the same parameters. No one is saying that our purchase decisions should be the same as theirs - they are just being kind enough to share stats over thousands of drives, which most of us couldn't afford to gather, so we can use that in making our own decisions.

      This is similar to

    • by DRJlaw ( 946416 )

      After all this research, Backblaze still pick the highest failing drive.

      They say: "We are willing to spend a little bit more on drives that are reliable, because it costs money to replace a drive. We are not willing to spend a lot more, though."

      They also further explain: "The good pricing on Seagate drives along with the consistent, but not great, performance is why we have a lot of them."

      There was a comment by Yevgeniy Pusin that included some wage and hour estimates versus the cost of buying better drives.

  • by RogueyWon ( 735973 ) on Tuesday January 21, 2014 @07:09PM (#46030433) Journal

    I live in mortal terror of the Seagate Squeak. This is an intermittent sound that their 2 and 3 TB Barracudas sometimes start to make after a while, which sounds a little like a bird chirp. It's apparently caused by crap power management on the drive.

    There's actually very little information out there on whether or not it is a definitive precursor of drive failure, or just something those drives start to do after a while. However, it's so unsettling that I've ended up pre-emptively replacing two drives in my home PC which developed it.

    • by gigne ( 990887 )

      Ohhhhh. I just replaced three (yes, three!) dead Seagates that all stopped working within the last month. The last one to go started chirping about a month before it died.

      I currently have 5 more Seagates that are either spinning down and then back up, or are power cycling for some reason. At last look, the SMART information told me everything was ok with the drive, but even now I can hear it starting the slow decline to click death.
      And no, they are not the "green" models that spin down every 2 seconds.

      • I have a long memory of failing WD drives, so I have been avoiding them like the plague for the last 6 years. It's only 2 data points, but:

        - 8 x 1.5TB array of Seagates in a RAIDZ2 configuration, ran essentially 24x7 for 2 or 3 years with no failures
        - 8 x 3TB array of Seagates in the same configuration, been running for about 2 years with no failures.

        Seems my experience is not the norm... Or maybe I need to cross that 3-year barrier. Shame I fill them up too fast to make it to 3 years so far.

        • by gigne ( 990887 )

          Indeed, it is all very subjective. I think the thing we can all agree on is that drives fail. Often.

      • In my experience SMART ain't worth a wank. I've had tens of drives fail, and only ever had any heads-up from SMART on one.

        I heard from a data recovery service that the main problem with the Barracudas is the power supply board; they stock loads of them, as most of their business is solved by replacing it.
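
For anyone who still wants to watch SMART despite its poor track record in this thread, a minimal polling sketch. It assumes smartmontools is installed and the script has permission to query the drive; the attribute-table parsing is simplistic, and the output format varies by drive and smartctl version.

```python
# Minimal SMART check: run "smartctl -A" and flag the attributes most people
# watch for impending failure. Assumes smartmontools is installed and the
# script can read the device; the table format varies by drive and version,
# so the parsing here is deliberately simplistic.
import subprocess

WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def smart_warning_signs(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    warnings = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHED:
            try:
                raw = int(fields[9])
            except ValueError:
                continue
            if raw > 0:
                warnings[fields[1]] = raw
    return warnings

if __name__ == "__main__":
    print(smart_warning_signs() or "no reallocated or pending sectors reported")
```
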

  • What's the use case for any more than 50% write?

    • What's the use case for any more than 50% write?

      Backup. I have two raid 6 arrays. One is backup for the other. One is 100% write, the other isn't.

    • Archiving and backups spring to mind.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Back-ups. For example, at my company we have thousands of DLT tapes and fill about a dozen of them a week. Every Monday, they're moved offsite to a bank safe deposit box. Other than for testing, not a single time in the seven years I've worked here have we read a tape after it was written. We have 100% write. A friend works for Backblaze, and he just confirmed that they have basically the same situation. The vast majority of their users write data that they never read back.

      • If you never read it back, you have no clue if you've written it properly. Thus you have no idea if renting that safe deposit box is a waste of money or a wise investment.

        Do a bare metal restore on a cold system every once in a while.
      • I was working on a project with a large bank, and during one of my calls, the bank's project manager told me a comical story about their back-up procedures. They had switched from tapes to hard drives, and every day, when the truck drove up for that office's data back-ups (not actual banking data, but backups of all the administrative systems in that office), due to contracts which were still in force after years, it was a huge trailer truck with nothing to put into it but a single 3.5" hard drive. The co
    • Backups. You have a cron job run the backup every night, or even every week. Maybe once a year your own system fails and you have to restore from backup - that gives a ~50:1 write/read ratio (98%) for the weekly backup, and a ~365:1 ratio (99.7%) for a daily backup.

      Coincidentally, TFS begins with "Backblaze, the cloud backup company".

    • by fisted ( 2295862 )
      With only four people having pointed it out so far, I'm not sure you got it yet.
      The answer is backups. Duh.
    • What's the use case for any more than 50% write?

      WOM

  • And what about... (Score:5, Insightful)

    by Obfuscant ( 592200 ) on Tuesday January 21, 2014 @07:12PM (#46030457)
    Enterprise grade disks? The cheapest disk is not always the cheapest disk in the long run. I can buy consumer disks for my disk servers, but when they fail I have to spend time replacing them and paying for them myself. When my enterprise grade disks fail, they're under warranty and are replaced "free".
    • Re:And what about... (Score:5, Informative)

      by brianwski ( 2401184 ) on Tuesday January 21, 2014 @07:22PM (#46030537) Homepage
      Disclaimer: I work at Backblaze. I object to the marketing term "Enterprise grade"; it is confusing, and I'm not even sure the drives have the attributes you think they have. There is a completely different blog post Backblaze did about "Enterprise vs Consumer Drives" which comes to the conclusion that Enterprise isn't better: http://blog.backblaze.com/2013... [backblaze.com]
      • by fahrbot-bot ( 874524 ) on Tuesday January 21, 2014 @07:54PM (#46030891)

        I object to the marketing term "Enterprise grade"; it is confusing, and I'm not even sure the drives have the attributes you think they have.

        Obviously, they're designed to work on the Enterprise. Now whether that's the aircraft carrier, space shuttle, or starship is unclear.

      • by Junta ( 36770 )

        In high-performance computing, enterprise drives make a large difference in performance characteristics.

        In terms of failure, I will say that enterprise disk subsystems and disks are much more cautious about disk health and will fail a still-workable drive. They also tend to continually scrub in the background so that unreadable sectors don't go unnoticed.

        Write-mostly workloads to a bunch of consumer grade disks will have errors that you may never detect. Frequently I have seen arrays

        • by brianwski ( 2401184 ) on Tuesday January 21, 2014 @08:49PM (#46031403) Homepage

          Write-mostly workloads to a bunch of consumer grade disks will have errors that you may never detect.

          At Backblaze, we try to pass over the data about once every two weeks. We re-read it from disk and recalculate a SHA1 checksum to make sure no bits were flipped or lost. It is my (informed) opinion that *ALL* hard drives and *ALL* configurations will have errors you may never detect unless you do this. You can't ever trust any file system.

          I think many people assume RAID does this checksumming. As far as I know, RAID handles entire drives failing, but it doesn't really help with a drive that has begun to fail and is flipping a few bits here and there while still being mostly responsive.
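
A bare-bones illustration of that kind of periodic scrub, for anyone curious what it looks like: store a checksum per file, re-read everything later, and compare. This is only a sketch of the idea, not Backblaze's implementation.

```python
# Bare-bones bit-rot scrub: keep a SHA-1 per file, periodically re-read every
# file and compare against the stored value. Sketch of the idea only; a real
# system would also distinguish legitimate edits from silent corruption.
import hashlib, json, os, sys

MANIFEST = "checksums.json"

def sha1_of(path, bufsize=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(root):
    known = json.load(open(MANIFEST)) if os.path.exists(MANIFEST) else {}
    mismatches = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name == MANIFEST:
                continue
            path = os.path.join(dirpath, name)
            digest = sha1_of(path)
            if path in known and known[path] != digest:
                mismatches.append(path)
            known[path] = digest
    with open(MANIFEST, "w") as f:
        json.dump(known, f, indent=2)
    return mismatches

if __name__ == "__main__":
    bad = scrub(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(f"{len(bad)} file(s) changed since the last scrub")
    for path in bad:
        print(" ", path)
```
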

          • by MetricT ( 128876 )

            I manage a couple of petabytes of scientific data (LHC) on our own object filesystem, and at that scale RAID really isn't an option any more: with that many drives you will, with unacceptable frequency, end up with two simultaneous drive failures.

            All our new data is being stored with Reed-Solomon 6+3 redundancy. And I greatly look forward to the day when a drive can fail at 3 am and I don't have to get paged to repair it.

            And Seagate well and truly sucks. Not only do they have an unacceptabl

            • TLER is a marketing term used by Western Digital, so of course Seagate doesn't have it. Seagate's similar feature is called Error Recovery Control (ERC), while Hitachi calls it Command Completion Time Limit (CCTL). There's a similar set of terms for vibration limitation features that are important when you put a bunch of drives into one chassis, and those are also specific to each manufacturer.

              I've found Western Digital's cheapest line of consumer drives do terrible things when hitting a failure, basicall

    • Obviously too expensive. The cheapest Seagate "Enterprise Capacity" 4TB drives cost $320 to $380 depending on where you buy them. Even if they have 100% reliability, not a single failed drive, they're 50-100% more expensive. Unless failure rates are in the 20%+ range, I doubt it would be worthwhile.

      And whattya know, their current preferred drive is a 4TB Seagate desktop drive with a 4% annual failure rate. That's actually worse than competing desktop drives that cost only a few bucks more, which means, to B

  • That's interesting (Score:5, Informative)

    by IWantMoreSpamPlease ( 571972 ) on Tuesday January 21, 2014 @07:13PM (#46030463) Homepage Journal
    For the past 11 years, I've used nothing but Seagate drives in my builds for clients. Over those 11 years, I built something like 20 systems a month (on average), with occasional large-scale orders of 200. The number of failed Seagates I could count *on one hand*. YMMV, clearly, but I stand behind Seagate.
    • by fisted ( 2295862 ) on Tuesday January 21, 2014 @07:36PM (#46030689)
      Well, I don't know how many fingers you have on one hand, but it can't possibly be just five.
    • by AmiMoJo ( 196126 ) *

      The problem with this kind of research is that it only really applies to servers. Running 24/7, writing 24/7, probably running quite warm. Maybe Seagate drives are particularly bad in this set up, but fine for typical desktop systems.

      • The problem with this kind of research is that it only really applies to servers. Running 24/7, writing 24/7, probably running quite warm. Maybe Seagate drives are particularly bad in this set up, but fine for typical desktop systems.

        The Seagate ST3250623A 250 GB disk in my MythTV system (used for both system and video storage) has been running 24/7 since Friday January 19th, 2007. I got this drive because, at the time, it was reported as reliable and very quiet (which it is).

        MythTV Recording Stats:

        • Number of shows: 909
        • Number of episodes: 8030
        • First recording: Friday January 19th, 2007
        • Last recording: Tuesday January 21st, 2014
        • Total Running Time: 7 years 1 day 12 hrs 48 mins
        • Total Recorded: 10 months 17 days 14 h
    • by dj245 ( 732906 )

      For the past 11 years, I've used nothing but Seagate drives in my builds for clients. Over those 11 years, I built something like 20 systems a month (on average), with occasional large-scale orders of 200. The number of failed Seagates I could count *on one hand*. YMMV, clearly, but I stand behind Seagate.

      Some of this may be peculiar to where you are sourcing your drives from and how carefully they ship. The same drive on Amazon vs Newegg usually has dramatically different ratings. I don't think the hard drive vendors are making drives of different quality for different retailers, but the retailers definitely have different packing and shipping standards.

    • by Chemisor ( 97276 )

      I'd second that. In my experience, Seagate is more reliable than WD. Of course, I don't go through thousands of drives, but a 26% failure rate just sounds unbelievable. Something is fishy with the survey, or maybe it is just their specific workload that is particularly bad for Seagate drives.

  • The Deskstar wasn't nicknamed Deathstar for nothing, back in the day...

    • The Deskstar wasn't nicknamed Deathstar for nothing, back in the day...

      The IBM Deskstar series was superb: faster than most other drives, an excellent size/price ratio, and very reliable too. The entire OEM business cheered it, and IBM was solidly trouncing the competition, until that fatal release of the Deskstar 75GXP around 2001, where bad firmware combined with a faulty factory in Hungary started to kill the drives prematurely (too much cutting-edge technology).

      I owned a lot of IBM Deskstar drives and they all performed really well and none of them died before they were obsol

  • Depends on model (Score:5, Informative)

    by Solandri ( 704621 ) on Tuesday January 21, 2014 @07:46PM (#46030781)
    If you RTFA, they break down the failure rates by model (no pun intended). There's a pretty huge variation between models (or at least the Seagate models). That's also what I saw in the StorageReview reliability database [storagereview.com] back when people were actively updating it (unfortunately you have to add a drive to the database to get access to it, so it was never very popular). The same manufacturer can make a gem and a stinker of a model. e.g. the IBM 75GXP (aka Deathstar) drives had one of the highest failure rates in the database. The drive which replaced it (60GXP I think) had one of the lowest failure rates in the database.

    So it's more nuanced than "Seagate stinks, Hitachi rules." (Hitachi is a subsidiary of WD now, operating separately only because that was a condition China placed on them before they'd OK the merger.)
  • Mirrors my experience too. I'm very impressed with Hitachi drives, Western Digital/Samsung are pretty poor, and Seagate are just plain shite.

  • by urbanriot ( 924981 ) on Tuesday January 21, 2014 @07:58PM (#46030933)
    If there's one thing you can credit Seagate for, it's consistency - since the '90s, the (R) for refurb on their drives has been the kiss of death, guaranteeing another failure within 3 months of receiving the replacement. While it's great that they have a clearly understandable domestic RMA team, they often send you a broken drive to replace your defective drive, so you now have to pay to ship two drives back.

    If you politely ask them to send you a new drive since they keep sending you bad drives, they'll politely tell you they can't guarantee you a healthy drive. Typically with our servers we're guaranteed at least one bad Seagate SAS 10k drive in a bank of 10, and we're pretty much at a 100% failure rate with RMA drives; many times the RMA drives they send us arrive already broken. Seagate (R) drives should never be installed in a server or anything reliable... heck, I'd keep Seagate drives out of anything you want to remain reliable.
  • You are purchasing STORAGE-TIME. Not just storage. Storage that disappears is useless.

    1 terabyte of storage that lasts 2 years is twice as useful as 1 terabyte of storage that lasts 1 year.

    Always buy whatever drive is warranted for 5 years. I pay 50% more for this! It's worth every penny. My terabyte-years are the cheapest.

    I have a 20TB LAN spread out over 3-4 computers (depending on the year). The only major crashes I've had on anything under 5 years old were, ironically, the two WD Caviar Greens I accidentally bought (meant to buy Black; got a little slaphappy with the shopping cart one afternoon). They both died within 6 months.

    The choice now is: Western Digital Caviar Black. The study posted in this article will not acknowledge this, as they bought the cheapest drives possible. It may make business sense with redundancy, but I do not RAID. Too expensive. (Ironic?)

    • I kinda have to disagree. You're quite correct in your analysis, except the whole bit about your terabyte-years being the cheapest, which was the point of your post.

      More accurate may be that your "terabyte-warranted-years" rate is the cheapest, but in terms of actual usage, many people may disagree with you. I haven't had a Seagate drive fail since 2001. I think the oldest I have in a system somewhere is from 2004, but that's beside the point - that drive is priced out in "terabyte-years" where years = 10. I have

  • I have had so many Seagate drives fail on me in the past 10 years it's not even funny. One client of mine had a Seagate fail in their server's RAID-1 array, then not more than a month later, the other one failed. Musta been a(nother?) bad batch.

    Western Digital has always been a solid drive and that's what I recommend to my clients. Can't say much for the others, because I normally only deal with them when I'm replacing them - either for upgraded storage or because they've failed/are failing.

  • Since their workload is 100% write, I recommend they use WOM (Write Only Memory).

  • I guess Hitachi fixed the DeathStar issues. I remember those old IBM Deskstars having horrible failure rates; then Hitachi bought the division.

  • by organgtool ( 966989 ) on Tuesday January 21, 2014 @09:06PM (#46031533)
    I bought a Samsung (which is really a rebranded Seagate) to use in my HTPC and less than a year later, it died. I sent it back and got a replacement, but it was a huge pain to have to reinstall Mythbuntu and XBMC, get the two programs reconfigured and communicating again, as well as re-import all of my TV shows, movies, and music and fix all of the broken metadata. Since I suspected that the drive may have been running hot, I installed temperature monitoring software for the hard drive and had it record the temp once per minute. Less than a year later, that drive began to fail. I looked at the temperature logs while the drive still worked and it was pretty steady at about 40 degrees Celsius. I thought this may have been too hot, but when I looked up the specs for the drive, it was rated to operate at up to 60 degrees Celsius. So that's two Seagate drives that failed in less than a year each. Even though I may be able to get another replacement from Seagate for the failed drive, I wouldn't bother wasting my time reinstalling and reconfiguring the HTPC apps just to have the drive fail again, so I broke down and bought a WD Green since my other WD's have been solid over the past several years.
  • by cpm99352 ( 939350 ) on Wednesday January 22, 2014 @02:27AM (#46033279)
    I searched the 4+ scored comments and didn't see anything, so here is Google's study [googleusercontent.com] (they go through a lot of drives).
