25,000-Drive Study Gives Insight On How Long Hard Drives Actually Last
MrSeb writes with this excerpt, linking to several pretty graphs: "For more than 30 years, the realm of computing has been intrinsically linked to the humble hard drive. It has been a complex and sometimes torturous relationship, but there's no denying the huge role that hard drives have played in the growth and popularization of PCs, and more recently in the rapid expansion of online and cloud storage. Given our exceedingly heavy reliance on hard drives, it's very, very weird that one piece of vital information still eludes us: How long does a hard drive last? According to some new data, gathered from 25,000 hard drives that have been spinning for four years, it turns out that hard drives actually have a surprisingly low failure rate."
Um.. (Score:5, Interesting)
Yah, except for my Western Digital Green, which failed 3 days after the warranty expired. And there are similar accounts on Newegg...
Re:Um.. (Score:5, Insightful)
over the last 20 years i've used almost every brand of hard drive and have had all the brands fail at least once. every single brand has had quality issues at one time or another
Re: (Score:2)
I miss Micropolis. I had an array of their 4.3 GB 10K RPM SCSI Tomahawks close to 20 years ago. A friend of mine has them now and they are still spinning. They sounded like an Airbus A320 and could heat a large closet, but they were fantastic. I don't think I ever had a Micropolis drive fail. I just retired them as larger, more efficient, quieter drives became available.
I think it all has to do with luck as far as which brand works for some people though. I know people that have never had WD drives fa
Re: (Score:3)
they are still spinning
Other than the novelty, why would anyone waste the electricity for 4.3GB of storage space (or even multiples of 4.3GB)?
Re: (Score:2)
they are still spinning
Other than the novelty, why would anyone waste the electricity for 4.3GB of storage space (or even multiples of 4.3GB)?
As long as they are doing what they need to be doing, how much electricity are you really going to save, and is it worth the PITA to change them? I have a system with a 15+ year old 12 GB drive in it. I've had a much lower-wattage appliance to replace it for several months now; I just haven't had the time to swap it out.
Re:Re-furbs (Score:5, Informative)
If you are sure that they were a relatively new model, and the refurb was a FACTORY refurb, that might be a good method. If Joe Stocking Clerk did the refurb, who knows what you will get.
When installing, and periodically thereafter, it is wise to run something like smartctl -a /dev/sd? on your drives and check the power-on hours and power-cycle count (not to mention the reallocated sector count and spin retry count).
You would be surprised how many refurbs are actually fairly heavily used, with a lot of hours.
My current server's RAID array is averaging 5.9 years, but has only seen 53 power cycles over that time. I actually tend to believe (without a great deal of evidence) that power cycles are harder on drives than running constantly.
Google actually did a similar study [googleusercontent.com] some years ago. Their study of over 100,000 drives largely agreed with the present study, right down to the three-phase distribution of failures over time.
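If you want to script that check rather than eyeball it, something like this minimal C sketch works (assumptions mine: smartmontools is installed, you have the privileges to query the drive, and it reports the usual ATA attribute names, which vary a bit by vendor):

/* Minimal sketch: shells out to smartctl and prints only the
   wear-related attribute lines mentioned above. */
#define _POSIX_C_SOURCE 200809L /* for popen/pclose */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sda";
    char cmd[256];
    snprintf(cmd, sizeof cmd, "smartctl -A %s", dev);

    FILE *p = popen(cmd, "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    char line[512];
    while (fgets(line, sizeof line, p)) {
        if (strstr(line, "Power_On_Hours") ||
            strstr(line, "Power_Cycle_Count") ||
            strstr(line, "Reallocated_Sector_Ct") ||
            strstr(line, "Spin_Retry_Count"))
            fputs(line, stdout);
    }
    return pclose(p) == 0 ? 0 : 1;
}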
Re: (Score:3)
I think he's saying that if the drive has only been on the market for a couple of months, the wear-and-tear failures haven't had time to happen yet.
Re:Um.. (Score:5, Funny)
Who is General Failure anyway, and why does he keep trying to read my hard drive??
I'm sorry, that's classified. And the NSA categorically denies doing it.
Re: Um.. (Score:4)
Unconfirmed reports rumor that they've even shared the same POST.
Re: (Score:3)
You can always ask Major Domo...
Re:Um.. (Score:5, Funny)
For the last 4 years I've had to deal with WD RE2, RE3, and RE4 hard drives. Although they are enterprise SATA hard drives, they seem to fail at a much worse rate than the consumer ones Backblaze based their report on. I see far fewer problems in the first year, but they usually start dying when they reach 16,000 power-on hours, with only about 40% exceeding 26,000 hours.
Having said that, I count sector reallocation as a failure. In my experience, as soon as a disk has a non-zero value in Reallocated_Sector_Ct and Reallocated_Event_Count, it usually fails completely within a few weeks or months.
Fortunately, WD has a tool on their website which you must run before they give you an RMA number. I managed to get its source code:
#include <stdio.h>

int main()
{
    printf("Disk OK, no errors found.\n");
    return 0;
}
Re: (Score:3)
You have discovered the joy of vendor-supplied diagnostic software. It is all designed to deny failure/replacement.
I had a Dell system running horribly. I discovered the cause: the drive had widespread errors and had remapped a good chunk of the sectors that happened to be used by a VM. Running the VM meant redirected reads that brought the system to a crawl. It was somewhere in the thousands of reallocated sectors, with thousands more pending and millions of redirected reads. SMART claimed the drive was good, all wh
Re: (Score:3)
We don't use the simple SMART test or the vendor's test. We use either the Linux version of smartctl (smartctl -a /dev/sda) or a third-party tool for Windows.
By the way, you have to find a way to get around the so-called "RAID controllers" that most manufacturers use on consumer-grade machines, because they mask what is happening at the hardware level. You need to talk to the drive directly, not to some fake-RAID controller.
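To make that concrete, here's a rough sketch of the "talk to the drive directly" hunt using smartctl's -d device-type hints; -d sat and -d megaraid,N are real options, but which (if any) applies depends entirely on the controller in question, so treat the list as illustrative:

/* Tries a few device-type hints in turn: plain access, a SATA drive
   behind a USB/SAT bridge, then disk 0 behind an LSI MegaRAID. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *attempts[] = {
        "smartctl -a /dev/sda",               /* plain SATA/SAS */
        "smartctl -d sat -a /dev/sda",        /* USB/SAT bridge */
        "smartctl -d megaraid,0 -a /dev/sda", /* LSI MegaRAID, disk 0 */
    };
    for (size_t i = 0; i < sizeof attempts / sizeof attempts[0]; i++) {
        fprintf(stderr, "trying: %s\n", attempts[i]);
        if (system(attempts[i]) == 0)
            return 0; /* got a report from the drive itself */
    }
    return 1;
}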
Re: (Score:3)
My Media Center - 2 drives:
0 Reallocated Sectors, Power on hours: 20,462
0 Reallocated Sectors, Power on hours: 26,487
Web/File Server - 4 drives:
0 Reallocated Sectors, Power on hours: 54,197
0 Reallocated Sectors, Power on hours: 35,074
0 Reallocated Sectors, Power on hours: 21,108
0 Reallocated Sectors, Power on hours: 21,114
Asterisk system - 1 drive:
0 Reallocated Sectors, Power on hours: 27,320
Desktop System - 1 drive:
0 Reallocated Sectors, Power on hours: 9,396
Re: (Score:3)
I've never had a hard drive fail on me, across 5 PC generations. I booted my old 486 a few months ago, one last time before disposing of it. Also no failure after... 21 years. Maybe I just got lucky, though.
Re: (Score:2)
over the last 20 years i've used almost every brand of hard drive and have had all the brands fail at least once. every single brand has had quality issues at one time or another
Sooner or later all drives wear out. I usually lose 1 or 2 drives a year. I mostly buy Seagate. I liked them best when 7-year guarantees were common, but I've only had one Seagate actually fail within warranty.
Western Digital, on the other hand, is something I avoid. One project I worked on was seeing a 30% infant mortality rate. And that included the drive the sysadmins installed in my development system and then didn't bother to keep up the backup schedule on. Lost 2 weeks of work that way.
More recently,
Re:Um.. (Score:4, Informative)
I'm in the market for a new external hard drive (my 1TB one is getting too small for my backups) and kept looking at Seagate. Unfortunately, my father-in-law had a Seagate which broke rather quickly, and my wife is convinced that this means all Seagate drives are junk. The reality is that Seagate, Western Digital, and every other large hard drive manufacturer are going to have a lot of failed drives, by the sheer fact that they produce a lot of drives. Since people who are happy with their products don't post comments as often as people who aren't, you're likely to see a higher percentage of complaints in the reviews than the percentage of buyers who actually experienced problems.
Re:Um.. (Score:5, Funny)
Maybe it's a CONSPIRACY in which they've invested ALL their manufacturing PRECISION into guaranteeing that the drives will fail precisely THREE DAYS after WARRANTY.
Consider this! You register for warranty, and you enter the purchase date, right? What if... WHAT IF... some FIRMWARE CODES in the drive pick up this transaction and STORE THE INFORMATION IN FLASH. Then, starting the day after warranty expiry, the drive STARTS TO DO BAD THINGS, e.g. not park properly, or run just a little too slowly, or maybe even there's like a secret drop of DESTRUCTION SAUCE which is released onto the platters at this time.
Anyway, you see where I'm going here? REPTILE OVERLORDS are conspiring with 9/11 truthers (yeah, they're in on it! it's all a false flag operation) to destroy hard drives.
And this whole study.
Is.
SPONSORED BY A JEWISH-OWNED CORPORATION.
Yeah.
Re:Um.. (Score:4, Funny)
Not one connection to the NSA, or Snowden's ex-girlfriend, or the World Bank, or two employees at Infowars who spoke on the condition of anonymity to discuss their true jobs with the Bilderberg Group? FAIL.
Re: (Score:2)
SHUT UP MAN SHUT UP you mentioned Snowden's ex-girlfriend we're all fucked now DONT YOU SEE WHAT YOU'VE DONE
Re: (Score:2)
See? There's a rational explanation for everything.
Re: (Score:2)
They can't program the devices to fail on a specific day. That's stupid. They're just designed with a secret substance that reacts with that day's PLANNED CHEMTRAIL composition.
Re:Um.. (Score:4, Funny)
chemtrails don't exist, they're just soul shadows of the RUSSIAN WOODPECKER. now that was some hardcore shit.
Re: (Score:2)
Actually, they can and they do. [wikipedia.org]
Re: (Score:2)
You're clearly a disinfo agent. Reptilians combined with truthers make sure the drives start malfunctioning at a certain date by simply keeping a watch on the metadata written on the disk itself for created or updated files. For the few cases of 100% encrypted storage, they rely on internal counters that officially do S.M.A.R.T. metering.
Re: (Score:2)
Maybe it's a CONSPIRACY in which they've invested ALL their manufacturing PRECISION into guaranteeing that the drives will fail precisely THREE DAYS after WARRANTY.
I believe it would be cheaper to simply make the drives fail using a random number generator and a firmware routine. The effect would be the same, but software solutions are usually cheaper.
Re: (Score:2)
And this, folks, is proof positive that aluminum foil [healthcentral.com] is bad for you.
Re: (Score:2)
All these nonsensical ALL CAPS... could it mean...?
Oh shit! We're being invaded by Orz, and they're DANCING!
Re: (Score:2)
The Green drives are fine so long as you don't expect them to be as fast as 15k SCSI, except that I've had a very high failure rate on the 3TB model; that may just be bad luck, but I've yet to see the 1TB to 2TB models fail even after five years.
However, having read the article, I think I'll be replacing the 4-5 year old drives soon :).
Re: (Score:2)
Both my 1TB drives have developed a *LOT* of bad sectors; they are about 4 years old. But they are the only two WDs that have failed on me.
20% failure rate in 3 years is LOW? (Score:5, Insightful)
>> hard drives actually have a surprisingly low failure rate.
You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?
Re: (Score:3)
i'm sure they have data on more hard drives than what you have handled
Re: (Score:2)
They have a lot of drives, but their data only covers 4 years. The article would be more meaningful if they had been gathering data for a longer time, rather than just resorting to crap like "engineer Brian Beach speculates that the failure rate will probably stick to around 12% per year."
Re: (Score:2)
They use consumer hard drives, not enterprise ones. They say themselves that this data probably does not really apply to enterprise drives. BB also uses a custom chassis that a lot of people would take issue with as far as potential vibration etc. goes. That is a great deal different from a well-engineered SAN, or even a server, and it affects wear and performance.
Re: (Score:2)
They use consumer hard drives, not enterprise ones. They say themselves that this data probably does not really apply to enterprise drives. BB also uses a custom chassis that a lot of people would take issue with as far as potential vibration etc. goes. That is a great deal different from a well-engineered SAN, or even a server, and it affects wear and performance.
In other words, this is a typical Slashdot article with little or no meaningful information.
Re: (Score:2)
Yeah, it's a bit much to call one company's statistics a "study"; there are no comparisons etc. made, just raw stats.
Re: (Score:2)
They are just sharing data on their particular setup, not actually testing anything. Backblaze loves to blog; it's a marketing tool, after all. Their hardware really does not have any place outside of their market. Let's face it: you can cram 48 raw TB into a 1RU box with some actual processing power, RAM, and a decent interconnect. They are slightly less dense, with very little CPU, RAM, or interconnect.
Re:20% failure rate in 3 years is LOW? (Score:4, Insightful)
>> hard drives actually have a surprisingly low failure rate.
You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?
They do have a slightly more harsh environment than your desktop. On 24/7, to start... and in a box with a lot of other vibrating drives, for another.
Re: (Score:2)
>> more harsh environment than your desktop
Ya' mean like my server room?
Gotta remember...some of us do work in IT for a living. :)
Re: (Score:2)
Which is why I've had to return more than a dozen Seagate drives under warranty in the last two years from one sixteen-bay server; however, they were all one of two very similar models, so I'm more inclined to believe it was a bad batch or bad firmware than a larger issue with Seagate. Unfortunately, the higher-ups insist on replacing failed RAID drives with the same model/firmware.
Re: (Score:3)
That is what I was thinking.
When they said a "surprisingly low failure rate" I was expecting something like a 20% failure rate in 10 years (i.e., outlasting the usable life of the computer).
But 20% in 3 years, with an average usable lifespan of 5 years, means there is more than a 1/5 chance that you will need a new drive. That isn't really that good.
Re:20% failure rate in 3 years is LOW? (Score:4, Interesting)
Careful. These are consumer-grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track following and running in a relatively low power state. But these folks are using them as enterprise drives, running 24/7 in racks with other drives, in a hot environment. That is very different from what they were designed for. Heat is the enemy of disk drives.
Honestly, if you want enterprise drives buy enterprise drives. These folks don't (too cheap on the initial cost so they'd rather pay on the backend?), so they get higher failure rates than "normal" folks do for their drives. This is like buying a Cobalt and going off-roading with it -- it'll work, but not for long before something breaks because it wasn't designed to be used that way.
Re:20% failure rate in 3 years is LOW? (Score:5, Insightful)
Careful. These are consumer-grade drives. In other words, they're meant for use by typical consumers, where the disk spends 99.9999% of its time track following and running in a relatively low power state.
That would amount to about 32 seconds of activity per year.
There's more drive activity than that in a single Windows boot.
Stop making up numbers.
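For anyone who wants to see where that 32-second figure comes from, it's one multiplication (the numbers are the parent posts', not mine):

/* Quick check: if a drive is active only 0.0001% of the time
   (the complement of 99.9999%), how many seconds per year is that? */
#include <stdio.h>

int main(void)
{
    const double seconds_per_year = 365.0 * 24.0 * 3600.0; /* 31,536,000 */
    const double active_fraction  = 1.0 - 0.999999;        /* 1e-6 */
    printf("%.1f seconds of activity per year\n",
           seconds_per_year * active_fraction);            /* ~31.5 */
    return 0;
}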
Re: (Score:3)
Honestly, if you want enterprise drives buy enterprise drives. These folks don't (too cheap on the initial cost so they'd rather pay on the backend?), so they get higher failure rates than "normal" folks do for their drives.
It might make good economical sense to buy "consumer" drives, if the price difference is enough.
Since they are using RAID for uptime and backups to prevent data loss, and don't need ultra-fast storage, the comparison would be between consumer drives and the "cheap" enterprise drives. Although you can now get drives like the WD Red for about a 5% premium over the WD Green, those are really slow drives. The WD Black vs. the WD RE line sees more like a 35% price difference with the same 5-year warranty.
That mea
Re: (Score:3)
>> hard drives actually have a surprisingly low failure rate.
You call a 20% failure rate in 3 years LOW? My career rate is closer to 5% over 5 years - who keeps buying all those crappy hard drives?
Apparently me. I've had 6 hard drives die within just over a year of getting them, over the last few years. And that is out of 8 drives total.
On the other hand, I have 20-year-old SCSI drives that still run. 40MB drives, woot! =)
Re: (Score:2)
Wrong website for that.
Re: (Score:2, Funny)
Long hard drives are nice, but Tour golfers realize that accurate chipping and putting is for the dough.
Brands/temperatures/power cycling (Score:3)
Does an HD that is always on last more or fewer hours? What's the ideal temperature? And a hard one to test: vibration.
Re:Brands/temperatures/power cycling (Score:5, Insightful)
If you turn it off every night (when you go home from work) . . . it'll work fine, and last five years . . . then you're in the danger zone.
If you LEAVE IT ON for weeks at a time and NEVER turn it off . . . it'll work fine, and last five years . . . then you're in the danger zone.
What you NEVER want to do is . . . run it for a year (like at a factory plant) then turn it off for a week vacation. You're toast. (In my limited experience of 28 years) . . . if you turn it off that week . . . there is a 75% chance . . . it'll never turn on again.
I don't know if the "grease" settles, or the metal binds . . . I just know if it's been on a year . . . don't turn it off for more than an hour or two if you want it to continue to work.
Re: (Score:3)
I'd believe that one for sure. I've had WD Blacks die after swearing by them the previous generation, a couple of the same model. Same with an office setup long ago, 25% failure in a year. No brand has held favor long enough to be useful info to me.
On a sadder note: my faithful Bigfoot drive failed to boot up this weekend. Oh well, teenagers are sooo temperamental :(
Happier note: NOS OEM replacement in hand. LOL, long term planning was a tad longer term than expected but still....
Google's own study was 4 times larger (Score:5, Informative)
Re: (Score:3, Informative)
http://hardware.slashdot.org/story/07/02/18/0420247/google-releases-paper-on-disk-reliability [slashdot.org]
The Google study was mentioned in Backblaze's own blog on this subject [backblaze.com]; the article misrepresents things a bit, IMO. Doing some more reading of their blog: when the floods hit Thailand, they actually harvested hard drives from external drives (another blog entry) [backblaze.com]. Makes me think maybe those drives are crappier by default / endure worse treatment on the way from the factory to the consumer.
Re:Google's own study was 4 times larger (Score:5, Interesting)
They are, actually. They're often custom made for the purpose, because when you think about it, what's the point of a high-speed hard drive when USB is the limiting factor?
USB mass storage doesn't support more than one outstanding request at a time, so features like NCQ and all that are pointless. Large caches were pointless in a world of USB 2.0, where the data can be pulled from the media faster than the interface can move it (has there been any USB 2.0 hard drive that gets more than 20MB/sec transfer? That's less than half the theoretical... and most mechanisms can pull 40+MB/sec off the inner tracks). Likewise, there's no point putting high-speed drives in there - the latency and seek times are pretty much the same, so 7200RPM vs. 5400? No big difference.
And of course, they're popular and cheap, and unless you can put value-add on there, people pay little, so making them really cheap is paramount. Heck, the later original Xboxes had 8GB drives that were bare-bones cheap - Seagate got rid of a ton of bearings and other stuff.
Heck, in some USB3.0 drives, especially those by WD and Seagate, they don't use SATA anymore - the drive electronics speak USB 3.0 natively with onboard controllers.
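The arithmetic behind "less than half the theoretical" checks out, for what it's worth:

/* Sanity check on the USB 2.0 numbers above: the bus signals at
   480 Mbit/s, so the raw ceiling is 60 MB/s; a drive sustaining
   20 MB/s is at a third of that (and real protocol overhead lowers
   the usable ceiling further). */
#include <stdio.h>

int main(void)
{
    const double usb2_mbit   = 480.0;
    const double ceiling_mb  = usb2_mbit / 8.0; /* 60 MB/s raw */
    const double observed_mb = 20.0;
    printf("raw ceiling %.0f MB/s, observed %.0f MB/s (%.0f%% of raw)\n",
           ceiling_mb, observed_mb, 100.0 * observed_mb / ceiling_mb);
    return 0;
}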
No one else? (Score:3)
"Surprisingly, despite hard drives underpinning almost every aspect of modern computing (until smartphones), no one has ever carried out a study on the longevity of hard drives — or at least, no one has ever published results from such a study."
I recall reading a /. story from Google on THEIR experiences with hard drive longevity several years ago, over a much larger sampling of drives. Even linked to a PDF with the particulars....
Maybe they are too small to count, compared to an upstart backup company...
Re:No one else? (Score:5, Informative)
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf [googleusercontent.com]
"Failure Trends in a Large Disk Drive Population", dated 2007.
Re: (Score:2)
I think the difference is that the Google study used commercial hard drives, whereas this one, since it comes from the upstarts, is about consumer-grade drives. Of course, the conclusions are pretty much the same, which is good news for us ordinary plebs I suppose.
Only four years? (Score:5, Insightful)
Four years isn't long enough. Come back to us when you reach 6 or 8 years. The study only looked at drives within the warranty period (WD drives have a 5-year warranty).
Also, the information they presented doesn't show that low a failure rate.
Re: (Score:3)
Does anyone actually use drives in a commercial environment that are more than 3-4 years old? By the time they are that old they aren't worth the space they take up and the power they consume, i.e. 1TB per form factor as opposed to 3TB in the same form factor.
Re: (Score:2)
From my experience, the majority of servers don't need to expand their storage much over time. We have a few servers with beefy storage for databases/file shares/email and the rest of them store most of their data on a NAS or just don't work with an expanding set of data (terminal servers, application servers, print servers). The end result is that we have a lot of 6 and 8 year old servers still spinning most of their original disks. The servers we do expand the storage for usually have disks added, not rep
Re: (Score:2)
Four years isn't long enough. Come back to us when you reach 6 or 8 years. The study only looked at drives within the warranty period (WD drives have a 5-year warranty).
Also, the information they presented doesn't show that low a failure rate.
Yes indeed. Nobody should publish any data at all until the minimum time requirements of Bill_the_Engineer are met!
This is still interesting, and will get more so as more years are added on. (You did read the bit where they say they're going to keep updating the data, didn't you?)
Re: (Score:2)
How is it not long enough? It corroborates existing, known information - even the 'best practice' of assuming drives are more likely to fail after 3 years, as well as the observation that if a drive survives its first year, it's likely to survive to three.
Things are mostly the same across the board. I'm not sure how anyone can claim 10% in the first year is 'low'.
20% is bad... (Score:5, Interesting)
99% of consumers have no backups and no RAID, so a 20% failure rate = a 20% chance of losing EVERYTHING.
I call that an unacceptably high failure rate.
And note: I have also seen a 20% failure rate at home. Higher if I use the crap WD Green drives.
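To put numbers on that, here's a toy calculation (assumptions mine: independent failures, no drive replacement over the window; the 20% is the article's 3-year figure, not mine):

/* With one drive and no backups, the failure rate IS the chance of
   losing everything. Mirror it (RAID 1) and, under the naive
   assumptions above, you need both drives to die. */
#include <stdio.h>

int main(void)
{
    const double p_fail = 0.20; /* per-drive failure rate over ~3 years */
    printf("single drive, no backup: %4.1f%% chance of losing everything\n",
           100.0 * p_fail);
    printf("mirrored pair (naive):   %4.1f%%\n",
           100.0 * p_fail * p_fail); /* 4%; prompt rebuilds lower it,
                                        correlated failures raise it */
    return 0;
}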
Re: (Score:3)
I think what you mean is a 20% chance of having a teachable moment.
Re: (Score:3)
Re: (Score:2)
Doesn't even have to be a drive failure for data loss to occur. You accidentally deleted a file? Too bad.
To be taken with a grain of salt (Score:2)
Backblaze did their study in their datacenter. This means they did it in a controlled environment. I'm sorry, but I don't have AC where my computer is... the air is not filtered. My PC is in my basement (some people put theirs in a room) where there's 30-40% humidity, using the normal crappy air I breathe like we all do. Some of us (not me) smoke, or live in places with lots of humidity or very dry air. Is this taken into account? Nope.
Well, this study is to be taken with a grain of salt, as lots of varia
Useless study (Score:5, Insightful)
Re:Useless study (Score:5, Funny)
Need lube to get more statements out of your ass?
Re:Useless study (Score:5, Funny)
Since you apparently already have the statistics, why do you need theirs?
model number. study shows brand doesn't matter (Score:3)
The Google report, based on many thousands of drives, showed that while some MODEL NUMBERS had much higher failure rates, the various brand names had similar failure rates overall. Western Digital will make two drives at the same time: one model that's very reliable, while the one next to it is crap. Same with every other manufacturer.
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf [googleusercontent.com]
If you insist on buying based on the brand name, HGST models have been very
this is consistent with my data... (Score:5, Interesting)
I worked at an online service for several years, way back in the late 90s and early 00s, and this data is consistent with the data I collected then, over perhaps an order of magnitude more units. While 25K drives may not be a lot at the scale of today's internet services, it is more than enough to draw statistically valid conclusions, as opposed to that 1 drive in your desktop gaming system that failed 1 day after the warranty expired.
Remember IBM Deskstar failures and correct stats (Score:2)
I remember how all my Deskstar drives failed [slashdot.org], one after another, very quickly...
Regarding those statistics, I think we should rule out any brand and model well known for failure because, as soon as the information goes public, we need to replace them with some other brand/model.
With such a strategy we can achieve a lower effective failure rate.
Seagate ST-225- 25 years old and still strong... (Score:2)
You're welcome.
This isn't surprising (Score:2)
This isn't surprising. To summarize: most early failures happen within the first year, and after 3 years, the survival rate drastically drops off.
This is a well-known phenomenon in IT storage, and it's why people will typically start replacing storage (or individual disks with any pre-fail signs) after 3 years.
That said, of the many disks I have still in service, most of them are older than 5 years, and I have some which are pushing 15 years old now without any concern of immediate failure. I've had pretty
Math (Score:2)
In the first phase, which lasts 1.5 years, hard drives have an annual failure rate of 5.1%. For the next 1.5 years, the annual failure rate drops to 1.4%. After three years, the failure rate explodes to 11.8% per year. In short, this means that around 92% of drives survive the first 18 months, and almost all of those (90%) then go on to reach three years.
Extrapolating from these figures, just under 80% of all hard drives will survive to their fourth anniversary.
1.00 (total) - .051 (failure rate for 1.5 years) = .949 (non-failure), but only 92% survive for 18 months (a.k.a. 1.5 years)? What?
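One plausible resolution (my reading; the article doesn't spell it out): the 5.1% is an annualized rate, so surviving the 18-month phase means compounding it over 1.5 years, which lands right on the article's 92% and 90% figures:

/* Survival under an annualized failure rate compounded over each
   phase: 0.949^1.5, not 1 - 0.051. Compile with -lm. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double s18 = pow(1.0 - 0.051, 1.5);       /* survive first 18 months */
    double s36 = s18 * pow(1.0 - 0.014, 1.5); /* ...and the next 18 too */
    printf("survive to 18 months: %.1f%%\n", 100.0 * s18); /* ~92.4 */
    printf("survive to 36 months: %.1f%%\n", 100.0 * s36); /* ~90.5 */
    return 0;
}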
My experience (Score:3, Insightful)
With my limited sample of hard drives (around 50 over the years), here's what I've found so far. The drives range from 1.2GB to 1TB models, SCSI/IDE/SATA:
*ALL* but 1 or 2 of my Maxtors either died or sounded like a bandsaw pretty soon
My Seagates are all dead save 1 or 2
My WDs seem fine, albeit some are noisy, but my two 1TB Greens pulled from external cases are pretty much dead.
I've had only 1 out of 10 SCSI drives die so far.
So my experience so far is: Maxtor was crap, and when Seagate bought them it lowered Seagate's reliability. And since *ALL* the drives I've pulled from enclosures are dead, I'm guessing they are selling their crappiest drives to other manufacturers.
The problem is they are not trying to make better drives, they are trying to make *bigger* drives. Fuck a 4TB drive, gimme a reliable 1TB.
All my obsolete hard drives were dismantled and recycled, and from what I saw, the more recent the drive, the more cheaply it's made (and the less reliable).
I should've kept statistics while dismantling them.
Re: (Score:2)
If the real thing don't do the trick, you better make up something quick. You gonna burn into the wick, aren't you?
Re:A study by BackBlaze (Score:4, Insightful)
These are the same stupid fucks that use rubber bands around hard drives in their "SAN" storage.
Given that anything remotely serious is built on the premise that you can't trust your hard drives, is a strategy that makes your HDDs incrementally less trustworthy, but much cheaper, actually 'stupid'?
I wouldn't want to use BackBlaze's 'Pods' on a small scale, because part of their low cost is achieved by moving all the redundancy, fault tolerance, etc. into software (and, for a small shop, paying a bit more for fancy hardware that handles that, along with backups, is cheaper than having a software guru on hand). But on a large scale, making the 'overhead' (i.e., dollars of hardware purchased to support each disk) as low as possible, and just using software (with its high up-front cost but zero cost to copy an arbitrary number of times), seems pretty reasonable.
Now, if their arrangement was so dodgy that it was actively murdering drives, that'd be another story; but its thermals and electrical supply are good enough that the drives inside get to fail, or not, the same as though they were in any other enclosure, and these enclosures are crazy cheap, so why not?
Re: (Score:3)
When I buy a new hard drive, I test it with badblocks, which nowadays seems to take about a week. Something like 20% of the hard drives fail during testing immediately after purchase. Of course they go straight back to the store when this happens.
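For reference, the burn-in described above typically boils down to a destructive badblocks write pass over the whole drive; a minimal wrapper (the flags are real badblocks options, but -w WIPES the disk, so double-check the device path and only run it on a drive with nothing on it):

/* Sketch of a new-drive burn-in: badblocks -w runs four full
   write+read pattern passes; -s shows progress, -v is verbose. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 2;
    }
    char cmd[256];
    snprintf(cmd, sizeof cmd, "badblocks -wsv %s", argv[1]);
    return system(cmd) == 0 ? 0 : 1;
}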