BackBlaze's Hard Drive Stats for Q2 2017 (backblaze.com)
BackBlaze is back with its new hard drive reliability report: Since our last report for Q1 2017, we have added 635 additional hard drives to bring us to the 83,151 drives we'll focus on. We'll begin our review by looking at the statistics for the period of April 1, 2017 through June 30, 2017 (Q2 2017). [...] When looking at the quarterly numbers, remember to look for those drives with at least 50,000 drive days for the quarter. That works out to about 550 drives running the entire quarter. That's a good sample size. If the sample size is below that, the failure rates can be skewed based on a small change in the number of drive failures.
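For context, the arithmetic behind that threshold is simple; here is a minimal sketch, assuming the figure is counted in drive days and the quarter (April 1 through June 30) is 91 days. The variable names are mine, not BackBlaze's:

```python
# Sanity check of the sample-size threshold quoted above:
# 50,000 drive days over a ~91-day quarter corresponds to roughly 550 drives
# running for the entire quarter.
DAYS_IN_QUARTER = 91  # April 1 through June 30, 2017

min_drive_days = 50_000
equivalent_drives = min_drive_days / DAYS_IN_QUARTER
print(f"~{equivalent_drives:.0f} drives running the whole quarter")  # ~549
```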
Editor's note: In short: hard drives from HGST, a subsidiary of Western Digital, and Toshiba were far more reliable than those from Seagate across the models BackBlaze uses in its datacenters.
HGST and Toshiba have been at the top for years (Score:5, Informative)
Editor's note: In short: hard drives from HGST, a subsidiary of Western Digital, and Toshiba were far more reliable than those from Seagate
Which has been the case since BackBlaze started releasing its reliability numbers, aside from a few instances where a specific model of Seagate performed unusually well.
Re: (Score:3)
Re: (Score:2)
HGST has come a long way since the Deathstar [wikipedia.org] days.
From your own link HGST never manufactured a "Deathstar". That was entirely IBM's doing.
Re: (Score:3)
From your own link HGST never manufactured a "Deathstar". That was entirely IBM's doing.
"The line was continued by Hitachi when in 2003 it bought IBM's hard disk drive division and renamed it Hitachi Global Storage Technologies. In 2012 Hitachi sold the division to Western Digital who rebranded it as HGST."
Hitachi didn't buy just the IP, they bought the division. That typically means all of the IP, manufacturing facilities, engineering, and business operations related to a product line. It's likely that a few people who worked on the "deathstar" still work for HGST today.
Re: (Score:1)
Also, the deathstars were all the "IDE" consumer drives. The SCSI professional drives were quite good. It depended on where the platters were sputtered and the electronics were made. I worked at the IBM facility in San Jose a few years before Hitachi bought the storage division...
Re: (Score:3)
and yet, they continue buying far more Seagate than anything else.
The reason is that they are cheaper, and it's not worth it to pay a premium for other brands.
On amazon.ca, 4TB Seagate is $140, Western Digital is $157.
For external 8TB drives (I don't understand why, but they're cheaper), Seagate is $235 and Western Digital is $280.
Re: (Score:2)
No legit vendor would do this because passing off refurbished parts as new is completely illegal in most Western countries.
Only if they're sold as "new." It's generally legal to resell them so long as you indicate that they're refurbished products.
Re: (Score:2)
The premium is not that small. The cheapest HGST is 21% more expensive than the cheapest Seagate I could find (4TB capacity)
Re: (Score:2)
And how much is your peace of mind worth? Or your time spent cleaning up after that Seagate drive blows up on you?
Good components cost more up front, but save you time and energy over their lifespan. I'm IT for a satellite office of a US multinational and, unlike the parent company, our workstations, dev servers and datacenter servers are all spec'd and scratch-built by me with decent components instead of being cheap-and-cheerful mass-purchased crap from the 2nd-lowest bidder. As a result my equipment fail
Re: (Score:2)
At these scales, a few percentage points of failure doesn't really matter if you save 25% cost. Even at relatively small densities of 100TB-1PB, 20% in disk cost savings is significant.
Additionally, you should plan for failure anyway; disks fail regardless of manufacturer. So buying a 'more reliable' drive is no guarantee, and good backups are still a good idea.
Re: (Score:2)
Re: (Score:2)
I don't buy components for work from a mail order place; I have a hardware guy who runs his own company and delivers after he picks the parts up from his reseller's warehouse in town - the same warehouse that supplies all those mail order shops. When I get hard drives from him I am usually getting them in bulk (10+) and each is still in the styrofoam racking that he got from the warehouse when he picked them up.
But for one offs or home use you are correct, better packaging means longer lived parts with moving c
Re: HGST and Toshiba have been at the top for year (Score:2)
Re: (Score:2)
They've said in the past that there are a lot of factors that go into it. For example, the Seagates may fail more often, but they're also cheaper. And sometimes it comes down to simple availability. If they can't get the HGST drives in the capacity and quantity they want, well, they're a data storage company, it's not like they're *not* going to buy hard drives, even if they are more likely to fail.
Re: (Score:2)
When I had to go into the hospital for an extended period I decided that the price advantage was no longer worthwhile. I was just replacing the buggers (Seagate) too often. I was no longer in the position to "baby" my arrays.
So I dumped all my Seagate drives for WD.
Those Amazon prices of yours really don't reflect a discount sufficient to justify the extra bother.
Re: (Score:1)
Re: (Score:3)
Editor's note: In short: hard drives from HGST, a subsidiary of Western Digital, and Toshiba were far more reliable than those from Seagate
I'm not sure that you can reach that conclusion from their data for the latest generation. The STD4000s were definitely hot garbage and the HGST 4TB were fantastic. But none of those drives are still on the market.
The Seagate STD8000s they are running have a combined 1.2M drive days with 38 failures. That's an average of 1 failure every 32k drive days. They also have around 1.75 failures per thousand.
The WDC60 drives only have 40k days on them so they're still in the ballpark of similar performance. With 443 dr
Re: (Score:2)
With an average of less than two months' use per drive, the current Seagate drives showed a 1.55% failure rate over three months, or a 6.2% expected annual failure rate, assuming all else is equal. With an average of five days' use per drive, the current HGST drives showed a 0% failure rate over
Re: (Score:2)
Re: (Score:2)
Tradition. This is the same company that shipped a warehouse full of known bad drives to market, as an accounting trick, back at the dawn of personal computers.
The same name anyhow.
Re: (Score:2)
There was a time when Seagate gave a much longer warranty than their competitors and the drives actually lasted longer.
Sadly, those days are long gone.
Re: (Score:2)
Re: (Score:1)
Perhaps you are thinking of MiniScribe; they shipped actual bricks in boxes prior to going out of business.
https://en.wikipedia.org/wiki/MiniScribe
Re:hard drives from HGST ... far more reliable (Score:5, Interesting)
Seagate. Back in the 20 meg days. It was around the first time they went bankrupt.
They got caught carrying a warehouse full of test failures as an asset. When caught by the auditors they doubled down and shipped them.
Re: (Score:3)
Tradition. This is the same company that shipped a warehouse full of known bad drives to market, as an accounting trick, back at the dawn of personal computers.
Wow, I worked for a company that boxed and "shipped" a pile of equipment to a host of resellers who hadn't ordered anything. The shipping terms were FOB Origin, so the resellers were responsible for picking up the equipment, which of course they didn't, because they didn't order it. Then, when the resellers got their invoices and complained that they didn't order all this stuff, it got "returned" for a refund. Of course the "sell" date was in one quarter and the "returns" happened in the following one.
Re: (Score:2)
'Stuffing the channel' is common enough to have a cute name. In very limited cases, it's even legal. But consult a shyster.
Re: (Score:2)
I stopped buying Seagate drives years ago. They still suck?
They did several years ago. Since then I replaced my Seagate drives with SSDs in the gaming PC and laptop, and Western Digital 1TB Red NAS drives [amzn.to] in the file server. Although I did get a newer Seagate 3TB hard drive [amzn.to] to serve as a backup drive for the file server. No problems with that drive yet.
Re: (Score:2)
Maybe creimer has a hard drive in his head too?
As a rule of thumb, I don't bother to memorize stuff. Specialized knowledge should always be documented. If not documented, I write the documentation and stick it into a knowledge base.
Re: (Score:2)
Some of us remember the old IBM Deathstar drives.
https://en.wikipedia.org/wiki/... [wikipedia.org]
IBM sold their hard drive business and it has become HGST now. Weird how that is now a reliable brand.
Re: (Score:2)
Re: (Score:3)
Corporate apologist shill alert. No one falls for that old lie any more. The difference between "consumer" and "enterprise" drives is a different label and a huge price gouge. Nothing more. Actually, 24x7 operation is a lot easier on them than constant on/off cycling.
Re: (Score:2)
BackBlaze already did this in an earlier post. They found no statistical difference between consumer and enterprise grade drives. In fact, IIRC, the enterprise drives were slightly worse (but within statistical variance).
The reason enterprise drives are made is for PHBs who see "Enterprise" and think ... "BETTER!!!!"
The sad thing is, that kind of crap marketing actually works.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
They give an example involving 15 drives whereby two failures in 100 days gives a daily failure rate of 2%. (Two percent of what?)
Good question. If two failures in 15 drives in 100 days = 2%, what would be the failure rate if two drives out of, say, 150 drives failed in 100 days? What if three drives failed in 101 days?
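BackBlaze's annualized rate is computed from drive days rather than a raw percentage of drives; a minimal sketch of that idea, applied to the hypothetical counts above (the helper name is mine, and it assumes every drive ran for the whole period):

```python
def annualized_failure_rate(failures: int, drives: int, days: float) -> float:
    """Failures per drive-year, as a percentage (a sketch, not BackBlaze's exact code)."""
    drive_days = drives * days
    return failures / (drive_days / 365.0) * 100.0

print(annualized_failure_rate(2, 15, 100))   # ~48.7%: the 15-drive example
print(annualized_failure_rate(2, 150, 100))  # ~4.9%: same two failures, ten times the drives
print(annualized_failure_rate(3, 150, 101))  # ~7.2%: three failures in 101 days
```

Under that convention, ten times the drives with the same two failures cuts the rate by a factor of ten, and the extra day barely moves it.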
Re: Three kinds of shit. (Score:2)
Re: (Score:2)
Hitachi sells enterprise SANs with their drives in them. Huge financial inducement to charge huge support fees and make your product better so as not to pay out.
Re: (Score:2)
Please consider using SSD drives in your NAS. Results are optimal.
As soon as a 1TB SSD becomes available for $50 each. I just replaced all my NAS hard drives last year. I'm hoping to replace those in five years with SSDs.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
A NAS isn't nearly fast enough for this to even matter. The N part of the whole thing will be your big bottleneck. Even the random read/write access pattern where SSD shines will be trashed by the fact that you're connecting across the network.
A single piece of spinning rust can saturate gigabit on its own.
Re:Bathtub model (Score:5, Informative)
I have a stack of dead 2TB+ Seagate drives sitting around here.
Never again. They merged with Maxtor and the worst of both companies emerged out of the ooze.
Re: (Score:2)
There was a time when I would have considered a Seagate drive but when they bought Maxtor, I swore never again. I don't think I would buy one even for someone else's computer.
Seagate drives will have you singing... (Score:5, Funny)
...that old R.E.M. song "That's me on the hard drive, losing my partition"
Re: (Score:2, Funny)
I thought that I heard you crashing
I thought that I heard a ding
I think I thought I heard you dying
Re: (Score:3)
...that old R.E.M. song "That's me on the hard drive, losing my partition"
Back in the early 90s we joked "That's me on the Conner". I guess these days, few people know or remember Conner hard drives and their dubious reliability, so the update makes sense! :)
Re: (Score:2)
And then later, Seagate doubled down and bought Maxtor.
Be sure, your RAID has a mixture (Score:3)
For a single drive, go with the most reliable model. For a RAID, however, be sure to mix different manufacturers, models, and batches to avoid correlated failures...
Because, if the failures are random, your mirror or even a large-count RAID5 will do fine for millennia [algebra.com], assuming you replace the failing ones in a reasonable time.
But if the drives are all the same, they may all — after spending the same time in the same enclosure under the same load — fail for the same reason at the same time. Having hot-spares or multiple redundancy will not help you...
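For reference, the "millennia" figure follows from assuming independent failures; a minimal sketch of the standard mean-time-to-data-loss approximation for a single parity group (the inputs are illustrative assumptions, not numbers from the linked page):

```python
def mttdl_raid5_hours(n_drives: int, mttf_hours: float, repair_hours: float) -> float:
    """Textbook MTTDL approximation for one parity group: MTTF^2 / (N * (N-1) * MTTR)."""
    return mttf_hours ** 2 / (n_drives * (n_drives - 1) * repair_hours)

# Illustrative assumptions: an 8-drive group, 1M-hour MTTF per drive,
# and 24 hours to replace and rebuild after a failure.
years = mttdl_raid5_hours(8, 1_000_000, 24) / (24 * 365)
print(f"~{years:,.0f} years")  # ~85,000 years, but only if failures are independent
```

Correlated, same-batch failures are exactly what invalidates the squared term, which is the point of mixing manufacturers and batches.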
Re: (Score:3)
Hot spare drives DO help, as long as you don't leave them spinning when they are in standby. It allows you to restore redundancy on a detectable failure and return your system to a "dual failure required for data loss" condition.
Your point about not using drives from the same manufacturing run in a RAID array is somewhat valid, in that drives from the same run are more likely to fail at similar times, but if you *monitor* your drives, many failures are evident before they become catastrophic
Re: (Score:3)
Having a hot spare will cut your drive-replacement time by no more than a few hours. That's usually no more than a fraction of the actual rebuild time and, as my graph shows, that does not perceptibly affect the RAID's total MTTF. Moreover, if you monitor the drives — as you r
Re: (Score:2)
I think we are going to disagree about the hot standby, but in truth it boils down to what your response time to a drive failure is. I've had systems I was responsible for that I couldn't lay hands on for at least 24 hours or more (they were in other countries). Granted, it wasn't an ideal situation and we did have limited onsite support, but I found having a hot spare (as well as one or more on the shelf) to be useful to the MTBF of the system. My biggest problem was getting the replacement drive p
Re: (Score:2)
then yeah, you may as well put the spare drive into the otherwise empty slot instead of on the shelf.
Re: (Score:1)
Had a job where one time we installed a dozen or so low-end RAID disk arrays. They had 8 slots but only 4 populated. So RAID-5 was our only real choice (given the application space requirement). The array vendor did supply a mixture of disk drive manufacturers, but it still did not help.
We ended up with a lemon model, the IBM 3.5" 9GB SCSI. They started dying. Once the root cause was determined (bad batch of drives), we asked our array vendor to replace them all. Perhaps 40 more dis
Re: (Score:2)
Large count RAID5 WILL NOT DO FINE (trust me). Your RAID5 takes exponentially more time for each drive you add and during that time your data is in a RAID0 situation. RAID with at least 2 parity drives is the minimum requirement, regular mirrors if you have failover systems or triple mirrors if you're in a SAN-situation.
Re: (Score:2)
The larger the count, the bigger the risk, yes. And yet, if the drives are all different, it will still do fine for thousands of years. No, I will not just "trust you". You may have some personal anecdote to "prove" it, but the math I referred to speaks for itself.
Wasteful bullshit. All too common among Infrastructure people (who never even studied Statistics, much less got a decent grade), but still b
Re: (Score:2)
Even a single anecdote would disprove your theory of 'thousands of years'. There is no such thing as 'thousands of years' of runtime on a drive, you're talking MEAN time BETWEEN failures (or MTTDL, mean time to data loss) and even then you have to account for all the drive configurations in existence, in an ideal world.
You can do the calculations, go ahead, there are calculators on the Internet for you. There used to be an Excel spreadsheet from a Sun engineer a long time ago, but
I'm sure you won't understa
Re: (Score:2)
It is not a "theory", I offer a mathematical proof [algebra.com]. You, on the other hand, would not offer even an anecdote.
Not on a drive, but for an array — a RAID5 with disks failing randomly will survive for millennia before two unrelated failures happen within the period of replace/rebuild-time of each other.
Ah, ho
Re: (Score:2)
Again, you don't understand the mathematics involved; linking to your own (bad) calculation is not proof. This calculator explains it much better than I can: http://wintelguy.com/raidmttdl... [wintelguy.com]
RAID5: 10 drive groups of 8 drives, 6TB drives:
MTTDL: 1.2 years
Probability of data loss is more than 50% in a year
MTTDL due to multiple disk failures: 286 years
Do you understand how incredibly low an MTTDL of 286 years is?
Just swapping it over to RAID6 with the same settings and the probability of data loss is 1% over 10
Re: (Score:2)
It is proof, unless an error is identified. Math has this nice property: it is not subject to opinion. What is bad about them?
Because you do not understand the Math...
I never claimed, that RAID5 does better. My claim is, it is perfectly sufficient and that the higher redundancy does not improve reliab
Re: (Score:2)
Okay, a quick scan of your site:
The entire section on Poisson: having more drives does not make it less likely that one will fail.
"Failures occur every mttf hours" - that's not what MTTF means. MTTF simply means that if you have a collection of "n" drives, you can statistically expect 1 failure every (MTTF / n). So if a manufacturer says your MTTF is 100 years (which is about what they promise), then you will have a failure amongst a set of ~1M drives about every hour.
Poisson's distribution sim
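The MTTF-versus-fleet-size point above is plain arithmetic; a minimal sketch with the same round numbers (variable names are mine):

```python
# A "100-year MTTF" for one drive does not mean any drive lasts 100 years;
# across a fleet of n drives you expect roughly one failure every MTTF / n.
HOURS_PER_YEAR = 8766  # ~365.25 days

mttf_hours = 100 * HOURS_PER_YEAR  # vendor-style claim, ~876,600 hours
fleet_size = 1_000_000

hours_between_failures = mttf_hours / fleet_size
print(f"one expected failure roughly every {hours_between_failures:.2f} hours")  # ~0.88
```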
Summary of the summary (Score:2)
Seagate is not reliable.
Apple for the win (Score:5, Funny)
It's sad that Windows and Linux users have to go to such troubles.
Me? I only buy Macs. Because Apple takes the 1% of the best drives made in each manufacturing lot and puts them in their Macs. That's why Macs are so expensive.
I mean, this has to be the reason, right? Surely they're not just buying the same parts as Dell and others and just selling overpriced computers and pocketing the profits.
Re: (Score:2)
> It's sad that Windows and Linux users have to go to such troubles.
Why do you have this deranged idea that PC users are forced to buy any particular brand of hard drive? We can buy any brand we like.
Re: (Score:2)
My joke was that you had to do any research at all and that Apple, being costly, had the best 1% of hardware and PC peasants had to buy the 99% rejects.
Hard disks ? (Score:2)
Hard disks are so old fashioned, why don't they use the Cloud ?
Buy HGST right away (Score:2)
BackBlaze is about to put in a behemothic order as it gets ready to take on all the CrashPlan customers.
WTF? (Score:2)
Seagate stats don't make any sense: ST4000DM001, 400 drives, 5 failures, which works out to a 1.25% failure rate, yet I see 30.43% in the table. Likewise with ST4000DX000.
Could anyone explain how the f they calculated the Seagate data?
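One hedged reading, assuming BackBlaze annualizes failures over drive days rather than drive count: a 30.43% figure with only 5 failures implies those 400 drives accumulated only about 6,000 drive days during the quarter (roughly 15 days each, e.g. drives being migrated out or retired), rather than 400 x 91 days. A sketch of that back-calculation; everything except the 5 failures and the 30.43% is inferred, not taken from the report:

```python
failures = 5
published_afr_percent = 30.43
drive_count = 400

# If AFR = failures / (drive_days / 365) * 100, then
# drive_days = failures * 365 / (AFR / 100).
implied_drive_days = failures * 365 / (published_afr_percent / 100)
print(f"~{implied_drive_days:.0f} drive days")                    # ~5997
print(f"~{implied_drive_days / drive_count:.0f} days per drive")  # ~15
```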
Re: (Score:2)