
25,000-Drive Study Gives Insight On How Long Hard Drives Actually Last

MrSeb writes with this excerpt, linking to several pretty graphs: "For more than 30 years, the realm of computing has been intrinsically linked to the humble hard drive. It has been a complex and sometimes torturous relationship, but there's no denying the huge role that hard drives have played in the growth and popularization of PCs, and more recently in the rapid expansion of online and cloud storage. Given our exceedingly heavy reliance on hard drives, it's very, very weird that one piece of vital information still eludes us: How long does a hard drive last? According to some new data, gathered from 25,000 hard drives that have been spinning for four years, it turns out that hard drives actually have a surprisingly low failure rate."
  • Um.. (Score:5, Interesting)

    by Pikoro ( 844299 ) <init@in i t . sh> on Tuesday November 12, 2013 @09:27AM (#45400287) Homepage Journal

Yah, except for my Western Digital Green, which failed 3 days after the warranty expired. And there are similar accounts on Newegg...

    • Re:Um.. (Score:5, Insightful)

      by alen ( 225700 ) on Tuesday November 12, 2013 @09:33AM (#45400325)

Over the last 20 years I've used almost every brand of hard drive and have had every brand fail at least once. Every single brand has had quality issues at one time or another.

I miss Micropolis. I had an array of their 4.3 GB 10K RPM SCSI Tomahawks close to 20 years ago. A friend of mine has them now and they are still spinning. They sounded like an Airbus A320 and could heat a large closet, but they were fantastic. I don't think I ever had a Micropolis drive fail. I just retired them when larger, more efficient, quieter drives became available.

I think it all has to do with luck as far as which brand works for some people, though. I know people that have never had WD drives fail...

        • they are still spinning

          Other than the novelty, why would anyone waste the electricity for 4.3GB of storage space (or even multiples of 4.3GB)?

          • they are still spinning

            Other than the novelty, why would anyone waste the electricity for 4.3GB of storage space (or even multiples of 4.3GB)?

As long as they are doing what they need to be doing, how much electricity savings are you really going to get, and is it worth the PITA to change them? I have a system with a 15+ year old 12 GB drive in it. I've had a much lower-wattage appliance to replace it for several months now; I just haven't had the time to swap it out.
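
Back-of-envelope, assuming the old drive idles around 8W and a modern low-power replacement around 3W (typical figures, not measured): a 5W difference running 24/7 is 5W × 8,760h ≈ 44kWh a year, or about five dollars at $0.12/kWh. Hardly an urgent swap.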

        • Re:Re-furbs (Score:5, Interesting)

          by gmclapp ( 2834681 ) on Tuesday November 12, 2013 @11:29AM (#45401703)
          I've actually had the most luck with refurbished drives. If you find a brand on Newegg that's fairly new, you eliminate the re-furbs that failed due to wear and tear. The ones that are left are DOA drives that got sent back because of common manufacturing flaws. These drives are 100% QC tested and I've yet to have one fail. The awesome kicker is that the stigma of a re-furb virtually guarantees that they'll be cheaper as well.
          • Re:Re-furbs (Score:5, Informative)

            by icebike ( 68054 ) on Tuesday November 12, 2013 @01:57PM (#45403751)

            If you are sure that they were a relatively new model, and the refurb was a FACTORY refurb, that might be a good method. If Joe Stocking Clerk did the refurb, who knows what you will get.

When installing, and periodically thereafter, it is wise to run something like smartctl -a /dev/sd? on your drives and check the power-on hours and power cycle count (not to mention the reallocated sector count and spin retry count).
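
For example, something like this pulls out the interesting attributes (the device name is just a placeholder; adjust for your system):

smartctl -a /dev/sdb | grep -E 'Power_On_Hours|Power_Cycle_Count|Reallocated_Sector_Ct|Spin_Retry_Count'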

            You would be surprised how many refurbs are actually fairly heavily used, with a lot of hours.

My current server's RAID array is averaging 5.9 power-on years per drive, but has only seen 53 power cycles over that time. I actually tend to believe (without a great deal of evidence) that power cycles are harder on drives than running constantly.

Google actually did a similar study [googleusercontent.com] some years ago. Their study of over 100,000 drives largely agreed with the present one, right down to the three-phase distribution of failures over time.

      • Re:Um.. (Score:5, Funny)

        by andy55 ( 743992 ) on Tuesday November 12, 2013 @10:04AM (#45400681) Homepage
        Who is General Failure anyway, and why does he keep trying to read my hard drive??
      • Re:Um.. (Score:5, Funny)

        by greenfruitsalad ( 2008354 ) on Tuesday November 12, 2013 @10:12AM (#45400769)

For the last 4 years I've had to deal with WD RE2, RE3, and RE4 hard drives. Although they are enterprise SATA drives, they seem to fail at a much worse rate than the consumer ones Backblaze based their report on. I see far fewer problems in the first year, but they usually start dying when they reach 16,000 power-on hours, with only about 40% exceeding 26,000 hours.

Having said that, I count sector reallocation as a failure. In my experience, as soon as a disk has a non-zero Reallocated_Sector_Ct and Reallocated_Event_Count, it usually fails completely within a few weeks or months.

        Fortunately, WD has a tool on their website which you must run before they give you an RMA number. I managed to get its source code:

#include <stdio.h>

int main(void)
{
      printf("Disk OK, no errors found.\n");
      return 0;
}

You have discovered the joy of vendor-supplied diagnostic software. It is all designed to deny failure/replacement.

I had a Dell system running horribly badly. I discovered the cause: the drive had widespread errors and had remapped a good chunk of data that happened to be used by a VM. Running the VM meant redirected reads that brought the system to a crawl. It was somewhere in the thousands of reallocated sectors, with thousands more pending and millions of redirected reads. SMART claimed the drive was good, all wh...

          • by icebike ( 68054 )

We don't use the simple SMART test, or the vendor's test. We either use the Linux version of smartctl (smartctl -a /dev/sda) or a third-party tool for Windows.

By the way, you have to find a way to get around the so-called "RAID controllers" that most manufacturers use on consumer-grade machines, because they mask what is happening at the hardware level. You need to talk to the drive directly, not to some fake-RAID controller.
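
For instance, smartctl can usually be pointed at the physical drive behind a controller or bridge with a device-type flag (which flag depends on your hardware):

smartctl -a -d sat /dev/sdb (SATA disk behind a USB/SAT bridge)
smartctl -a -d megaraid,0 /dev/sda (first physical disk behind an LSI MegaRAID controller)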

I've never had a hard drive fail on me, across 5 PC generations. I booted my old 486 a few months ago, one last time before disposing of it. Still no failure after... 21 years. Maybe I just got lucky, though.

Over the last 20 years I've used almost every brand of hard drive and have had every brand fail at least once. Every single brand has had quality issues at one time or another.

        Sooner or later all drives wear out. I usually lose 1 or 2 drives a year. I mostly buy Seagate. I liked them best when 7-year guarantees were common, but I've only had one Seagate actually fail within warranty.

Western Digital, on the other hand, is something I avoid. One project I worked on was seeing a 30% infant mortality rate, and that included the drive the sysadmins installed in my development system and then didn't bother to keep on the backup schedule. Lost 2 weeks of work that way.

More recently...

      • Re:Um.. (Score:4, Informative)

        by Jason Levine ( 196982 ) on Tuesday November 12, 2013 @10:31AM (#45401021) Homepage

I'm in the market for a new external hard drive (my 1TB one is getting too small for my backups) and kept looking at Seagate. Unfortunately, my father-in-law had a Seagate which broke rather quickly, and my wife is convinced that this means all Seagate drives are junk. The reality is that Seagate, Western Digital, and every other large hard drive manufacturer is going to have a lot of failed drives by the sheer fact that they produce a lot of drives. Since people who are happy with their products don't post comments as often as people who aren't, the percentage of complaints in the reviews is likely to be higher than the percentage of buyers who actually experienced problems.

    • Re:Um.. (Score:5, Funny)

      by Joining Yet Again ( 2992179 ) on Tuesday November 12, 2013 @09:34AM (#45400339)

      Maybe it's a CONSPIRACY in which they've invested ALL their manufacturing PRECISION into guaranteeing that the drives will fail precisely THREE DAYS after WARRANTY.

Consider this! You register for warranty, and you enter the purchase date, right? What if... WHAT IF... some FIRMWARE CODE in the drive picks up this transaction and STORES THE INFORMATION IN FLASH. Then, starting the day after warranty expiry, the drive STARTS TO DO BAD THINGS, e.g. not parking properly, or running just a little too slowly, or maybe there's even a secret drop of DESTRUCTION SAUCE which is released onto the platters at this time.

Anyway, you see where I'm going here? REPTILE OVERLORDS are conspiring with 9/11 truthers (yeah, they're in on it! it's all a false flag operation) to destroy hard drives.

      And this whole study.

      Is.

      SPONSORED BY A JEWISH-OWNED CORPORATION.

      Yeah.

      • Re:Um.. (Score:4, Funny)

        by game kid ( 805301 ) on Tuesday November 12, 2013 @09:47AM (#45400503) Homepage

        Not one connection to the NSA, or Snowden's ex-girlfriend, or the World Bank, or two employees at Infowars who spoke on the condition of anonymity to discuss their true jobs with the Bilderberg Group? FAIL.

      • Re:Um.. (Score:5, Funny)

        by Gavagai80 ( 1275204 ) on Tuesday November 12, 2013 @09:52AM (#45400571) Homepage
        This would explain why I've never had a hard drive fail on me yet in my life: I've never registered for a warranty on one. If you don't get the warranty, the reptiles don't bother sabotaging you.
      • They can't program the devices to fail on a specific day. That's stupid. They're just designed with a secret substance that reacts with that day's PLANNED CHEMTRAIL composition.

You're clearly a disinfo agent. Reptilians combined with truthers make sure the drives start malfunctioning at a certain date by simply keeping a watch on the metadata written on the disk itself for created or updated files. For the few cases of 100% encrypted storage, they rely on internal counters that officially do S.M.A.R.T. metering.

      • Maybe it's a CONSPIRACY in which they've invested ALL their manufacturing PRECISION into guaranteeing that the drives will fail precisely THREE DAYS after WARRANTY.

I believe it would be cheaper to simply make the drives fail using a random number generator and a firmware routine. The effect would be the same, but software solutions are usually cheaper.

And this, folks, is proof positive that aluminum foil [healthcentral.com] is bad for you.

  • by xxxJonBoyxxx ( 565205 ) on Tuesday November 12, 2013 @09:33AM (#45400327)

    >> hard drives actually have a surprisingly low failure rate.

    You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?

    • by alen ( 225700 )

I'm sure they have data on more hard drives than you have handled.

They have a lot of drives, but their data is only from 4 years. The article would be more meaningful if they had been gathering data for a longer time, rather than just resorting to crap like:

"Engineer Brian Beach speculates that the failure rate will probably stick to around 12% per year."

        • I was curious to look at this article until I saw that it was based on only 4 years of data, and concluded that it was of no real value.
They use consumer hard drives, not enterprise. They say themselves that this data probably does not really apply to enterprise drives. BB also uses a custom chassis that a lot of people would take issue with, as far as potential vibration etc. That is a great deal different from a well-engineered SAN, or even a server, and it affects wear and performance.

They use consumer hard drives, not enterprise. They say themselves that this data probably does not really apply to enterprise drives. BB also uses a custom chassis that a lot of people would take issue with, as far as potential vibration etc. That is a great deal different from a well-engineered SAN, or even a server, and it affects wear and performance.

          In other words, this is a typical Slashdot article with little or no meaningful information.

        • I'd also like to know if there is any difference seen in horizontal vs vertical-side mounting. Seems like if they were smart they would have had both configurations in the sample.
They are just sharing data on their particular setup, not actually testing anything. Backblaze loves to blog; it's a marketing tool, after all. Their hardware really does not have any place outside of their market. Let's face it: you can cram 48 raw TB into a 1RU box with some actual processing power, RAM, and a decent interconnect. They are slightly less dense, with very little CPU, RAM, or interconnect.

    • by houstonbofh ( 602064 ) on Tuesday November 12, 2013 @09:39AM (#45400405)

      >> hard drives actually have a surprisingly low failure rate.

      You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?

They do have a slightly harsher environment than your desktop. On 24/7, to start... and in a box with a lot of other vibrating drives, for another.

      • >> more harsh environment than your desktop

        Ya' mean like my server room?

        Gotta remember...some of us do work in IT for a living. :)

Which is why I've had to return more than a dozen Seagate drives under warranty in the last two years from one sixteen-bay server; however, they were all one of two very close models, so I'm more inclined to believe it was just a bad batch or bad firmware than a larger issue with Seagate. Unfortunately, the higher-ups insist on replacing failed RAID drives with the same model/firmware.

Like the IBM Deskstar. I had 4 fail, and they kept replacing them with the same drive/firmware. Turns out it was a firmware bug corrupting the drive. Finally, on the 5th failure, they gave me a larger, different drive. I also got a letter from some law firm asking me to join a class action suit against IBM for knowingly distributing bad drives and replacing bad drives with bad drives.
That is what I was thinking.
When they said a "surprisingly low failure rate", I was expecting something like a 20% failure rate in 10 years (a.k.a. outlasting the usable life of the computer).
But 3 years, with an average usable lifespan of 5 years, means there is a more than 1-in-5 chance that you will need a new drive, which isn't really that good.

    • by nerdbert ( 71656 ) on Tuesday November 12, 2013 @09:58AM (#45400627)

Careful. These are consumer-grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track-following and running in a relatively low power state. But the folks here are using them as enterprise drives, running 24/7 in racks with other drives, in a hot environment, which is very different from what they were designed for. Heat is the enemy of disk drives.

      Honestly, if you want enterprise drives buy enterprise drives. These folks don't (too cheap on the initial cost so they'd rather pay on the backend?), so they get higher failure rates than "normal" folks do for their drives. This is like buying a Cobalt and going off-roading with it -- it'll work, but not for long before something breaks because it wasn't designed to be used that way.

      • by the_other_chewey ( 1119125 ) on Tuesday November 12, 2013 @10:33AM (#45401041)

Careful. These are consumer-grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track-following and running in a relatively low power state.

        That would amount to about 32 seconds of activity per year.
        There's more drive activity than that in a single Windows boot.
        Stop making up numbers.

      • Honestly, if you want enterprise drives buy enterprise drives. These folks don't (too cheap on the initial cost so they'd rather pay on the backend?), so they get higher failure rates than "normal" folks do for their drives.

It might make good economic sense to buy "consumer" drives, if the price difference is big enough.

Since they are using RAID to keep uptime and backups to prevent data loss, and don't need ultra-fast storage, the comparison would be between consumer drives and the "cheap" enterprise drives. Although you can now get drives like the WD Red for about a 5% premium over the WD Green, those are really slow drives. The WD Black vs. the WD RE line sees more like a 35% price difference with the same 5-year warranty.

That mea...

    • by Nyder ( 754090 )

      >> hard drives actually have a surprisingly low failure rate.

      You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?

Apparently me. I've had 6 hard drives die within just over a year of getting them, over the last few years. And that is out of 8 drives total.

On the other hand, I have 20-year-old SCSI drives that still run. 40MB drives, woot! =)

  • by EmperorOfCanada ( 1332175 ) on Tuesday November 12, 2013 @09:40AM (#45400419)
I would love to see the breakdown (ha ha) by brand. But I would also like to see whether they had temperature variations or power-cycling stats.

Does an HD that is always on last for more or fewer hours? Is there an ideal temperature? And a hard one to test: vibration.
This. The test doesn't tell me how long my NAS drives should last given periodic usage, that is, a few hours a day. It seems the test drives were all continuously spinning, but were they also performing reads and writes continuously? More info desired.
    • by jhumkey ( 711391 ) on Tuesday November 12, 2013 @09:49AM (#45400533) Journal
      Only my personal experience but as for "power cycling" . . . I follow one basic rule.

      If you turn it off every night (when you go home from work) . . . it'll work fine, and last five years . . . then you're in the danger zone.
      If you LEAVE IT ON for weeks at a time and NEVER turn it off . . . it'll work fine, and last five years . . . then you're in the danger zone.
      What you NEVER want to do is . . . run it for a year (like at a factory plant) then turn it off for a week vacation. You're toast. (In my limited experience of 28 years) . . . if you turn it off that week . . . there is a 75% chance . . . it'll never turn on again.

I don't know if the "grease" settles, or the metal binds . . . I just know if it's been on a year . . . don't turn it off for more than an hour or two if you want it to continue to work.
    • Re: (Score:3, Informative)

      by Anonymous Coward

The Google study was mentioned in Backblaze's own blog on this subject [backblaze.com]; the article misrepresents things a bit, IMO. Doing some more reading of their blog: when the floods hit Thailand, they actually harvested hard drives from external drives (another blog entry) [backblaze.com], which makes me think maybe those drives are crappier by default / endure worse treatment on the way from the factory to the consumer.

      • by tlhIngan ( 30335 ) <slashdot&worf,net> on Tuesday November 12, 2013 @12:05PM (#45402199)

Doing some more reading of their blog: when the floods hit Thailand, they actually harvested hard drives from external drives (another blog entry), which makes me think maybe those drives are crappier by default / endure worse treatment on the way from the factory to the consumer.

They are, actually. They're often custom-made for the purpose, because, when you think about it, what's the point of a high-speed hard drive when USB is the limiting factor?

USB mass storage doesn't support more than one outstanding request at a time, so features like NCQ are pointless. Large caches were also pointless in the world of USB 2.0, where data can be pulled from the media faster than the interface can carry it (has there been any USB 2.0 hard drive that gets more than 20MB/sec transfer? That's less than half the theoretical rate... and most mechanisms can pull 40+MB/sec off even the inner tracks). Likewise, there's no point putting high-speed drives in there: the latency and seek times are pretty much the same, and 7200RPM vs. 5400RPM makes no big difference.
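
For reference: USB 2.0 high-speed signaling is 480Mbit/s, or 60MB/s raw, and bulk/mass-storage protocol overhead means real-world throughput tops out somewhere around 30-40MB/s, so a drive seeing 20MB/sec really is well under half the theoretical rate.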

And of course, they're popular and cheap, and unless you can add value, people pay little, so making them really cheap is paramount. Heck, the later original Xboxes had 8GB drives that were bare-bones cheap; Seagate got rid of a ton of bearings and other stuff.

Heck, some USB 3.0 drives, especially those by WD and Seagate, don't use SATA anymore: the drive electronics speak USB 3.0 natively with onboard controllers.

  • by WoodstockJeff ( 568111 ) on Tuesday November 12, 2013 @09:42AM (#45400439) Homepage

    "Surprisingly, despite hard drives underpinning almost every aspect of modern computing (until smartphones), no one has ever carried out a study on the longevity of hard drives — or at least, no one has ever published results from such a study."

I recall reading a /. story from Google on THEIR experiences with hard drive longevity several years ago, over a much larger sampling of drives. It even linked to a PDF with the particulars....

Maybe they are too small to count, compared to an upstart backup company...

  • Only four years? (Score:5, Insightful)

    by Bill_the_Engineer ( 772575 ) on Tuesday November 12, 2013 @09:44AM (#45400463)

Four years isn't long enough. Come back to us when you reach 6 or 8 years. The study looked at drives during the warranty period (WD drives have a 5-year warranty).

    Also the information they presented doesn't show that low of a failure rate.

    • by decsnake ( 6658 )

Does anyone actually use drives in a commercial environment that are more than 3-4 years old? By the time they are that old, they aren't worth the space they take up or the power they consume; i.e., 1TB per form factor as opposed to 3TB in the same form factor.

From my experience, the majority of servers don't need to expand their storage much over time. We have a few servers with beefy storage for databases/file shares/email, and the rest of them store most of their data on a NAS or just don't work with an expanding set of data (terminal servers, application servers, print servers). The end result is that we have a lot of 6 and 8 year old servers still spinning most of their original disks. The servers we do expand the storage for usually have disks added, not replaced.

Four years isn't long enough. Come back to us when you reach 6 or 8 years. The study looked at drives during the warranty period (WD drives have a 5-year warranty).

      Also the information they presented doesn't show that low of a failure rate.

      Yes indeed. Nobody should publish any data at all until the minimum time requirements of Bill_the_Engineer are met!

      This is still interesting, and will get more so as more years are added on. (You did read the bit where they say they're going to keep updating the data, didn't you?)

    • by CAIMLAS ( 41445 )

How is it not long enough? It corroborates existing, known information, even the 'best practice' of assuming drives are more likely to fail after 3 years, as well as the observation that if a drive survives its first year, it's likely to survive to three.

Things are mostly the same across the board. I'm not sure why anyone is claiming that 10% in the first year is 'low'.

  • 20% is bad... (Score:5, Interesting)

    by Lumpy ( 12016 ) on Tuesday November 12, 2013 @09:47AM (#45400505) Homepage

99% of consumers have no backups and no RAID, so a 20% failure rate = a 20% chance of losing EVERYTHING.

    I call that an unacceptably high failure rate.

    And note: I also have seen a 20% failure rate at home. Higher if I use the crap WD green drives.

    • I think what you mean is a 20% chance of having a teachable moment.

    • by delt0r ( 999393 )
So because people are stupid, hard drives need to be perfect? If you don't have backups, you *will* lose your data one day. Even a 5x improvement in HDD reliability won't change that.
      • Doesn't even have to be a drive failure for data loss to occur. You accidentally deleted a file? Too bad.

Backblaze did their study in their datacenter, which means they did it in a controlled environment. I'm sorry, but I don't have AC where my computer is; the air is not filtered. My PC is in my basement (as some people put theirs in a room) with 30-40% humidity, using the normal crappy air I breathe like we all do. Some of us (not me) smoke, or live in places with lots of humidity or very dry air. Is this taken into account? Nope.

Well, this study is to be taken with a grain of salt, as lots of varia...

    • It won't save you from heat or humidity; but the little breathing holes in HDDs are very aggressively filtered. The last few I butchered seemed to be some sort of carbon material with extremely fine pores, in a teflon pouch, also presumably with very fine pores, almost a cm thick over the air hole. Dust and whatnot might well play hell with the cooling in a PC, and smoking does pretty dreadful things indeed; but HDDs are serious about what they breathe.
  • Useless study (Score:5, Insightful)

    by slashmydots ( 2189826 ) on Tuesday November 12, 2013 @09:48AM (#45400519)
This study was completely useless. WHAT BRAND WERE THEY?! Hitachis and Fujitsus have a failure rate about ten times higher than a top-of-the-line Seagate drive.
  • Next step (Score:5, Insightful)

    by jones_supa ( 887896 ) on Tuesday November 12, 2013 @10:02AM (#45400673)
Run the test longer and show us the data for a span of 10 years. Additionally, reveal the brands and models of the disks. Thanks.
  • by decsnake ( 6658 ) on Tuesday November 12, 2013 @10:02AM (#45400675)

    I worked at an on-line service for several years way back in the late 90s and early 00s and this data is consistent with the data I collected then over perhaps an order of magnitude more units. While 25K drives may not be a lot in the scale of today's internet services it is more than enough to draw statistically valid conclusions, as opposed to that, oh, 1 drive in your desktop gaming system that failed 1 day after the warranty expired.
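
Back-of-envelope on the statistics, assuming drive failures are independent: with n = 25,000 drives and a true annual failure rate around p = 0.05, the standard error on the observed rate is sqrt(p(1-p)/n) = sqrt(0.05 × 0.95 / 25000) ≈ 0.0014, i.e. roughly ±0.3 percentage points at 95% confidence. A sample of one desktop drive tells you essentially nothing.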

I remember that all my Deskstar drives failed [slashdot.org] one after another, very quickly...

Regarding those statistics, I think we should rule out any brand and model well known for failure, because, as soon as that information goes public, we would need to replace them with some other brand/model.
With such a strategy we can achieve a lower effective failure rate.

My first hard drive was a Seagate MFM 20MB drive, an ST-225. It still performs flawlessly, and still gets used at least once a month. It still sounds like a small jet taking off... So, anecdotally, on my evidence the most reliable drive ever is the Seagate ST-225.

    You're welcome.

  • This isn't surprising. To summarize: most early failures happen within the first year, and after 3 years, the survival rate drastically drops off.

    This is a well-known phenomenon in IT storage, and it's why people will typically start replacing storage (or individual disks with any pre-fail signs) after 3 years.

That said, of the many disks I have still in service, most of them are older than 5 years, and I have some which are pushing 15 years old now without any concern of immediate failure. I've had pretty...

  • In the first phase, which lasts 1.5 years, hard drives have an annual failure rate of 5.1%. For the next 1.5 years, the annual failure rate drops to 1.4%. After three years, the failure rate explodes to 11.8% per year. In short, this means that around 92% of drives survive the first 18 months, and almost all of those (90%) then go on to reach three years.

    Extrapolating from these figures, just under 80% of all hard drives will survive to their fourth anniversary.

    1.00 (total) - .051 (failure rate for 1.5 years) = .949 (non-failure), but only 92% survive for 18 months (a.k.a. 1.5 years)? What?
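
The two sets of numbers reconcile if the 5.1% is an annualized rate compounded over the 1.5-year phase, rather than a flat rate for the whole phase (my interpretation; the article doesn't spell it out). A quick sketch in C:

#include <stdio.h>
#include <math.h>

int main(void)
{
      /* Annualized failure rates per phase, as quoted above. */
      double s18 = pow(1.0 - 0.051, 1.5);       /* survive months 0-18:  ~92.4% */
      double s36 = s18 * pow(1.0 - 0.014, 1.5); /* survive months 18-36: ~90.5% */
      double s48 = s36 * (1.0 - 0.118);         /* survive year 4:       ~79.8% */

      printf("18 months: %.1f%%\n", 100.0 * s18);
      printf("3 years: %.1f%%\n", 100.0 * s36);
      printf("4 years: %.1f%%\n", 100.0 * s48);
      return 0;
}

That yields roughly 92%, 90%, and 80%, matching the article's figures; the parent's 94.9% assumes the 5.1% applies only once across the whole 18 months.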

  • My experience (Score:3, Insightful)

    by Hamsterdan ( 815291 ) on Tuesday November 12, 2013 @10:56AM (#45401311)

With my limited sample of hard drives (around 50 over the years), here's what I've found so far. The drives range from 1.2GB to 1TB models, SCSI/IDE/SATA:

*ALL* but 1 or 2 of my Maxtors either died or started sounding like a bandsaw pretty quickly.

    My Seagates are all dead save 1 or 2

My WDs seem fine, albeit some are noisy, but my two 1TB Greens pulled from external cases are pretty much dead.

I've had only 1 out of 10 SCSI drives die so far.

So my experience so far is that Maxtor was crap, and when Seagate bought them it lowered Seagate's reliability. And since *ALL* the drives I've pulled from enclosures are dead, I'm guessing they are selling their crappiest drives to the external-enclosure manufacturers.

    The problem is they are not trying to make better drives, they are trying to make *bigger* drives. Fuck a 4TB drive, gimme a reliable 1TB.

All my obsolete hard drives were dismantled and recycled, and from what I saw, the more recent the drive, the cheaper it's made (and the less reliable it is).

    I should've kept statistics while dismantling them.

"It takes all sorts of in & out-door schooling to get adapted to my kind of fooling" - R. Frost

Working...