6 Terabyte Hard Drive Round-Up: WD Red, WD Green and Seagate Enterprise 6TB
MojoKid writes: The hard drive market has become a lot less sexy in the past few years thanks to SSDs. What we used to consider "fast" for a hard drive is relatively slow compared to even the cheapest of today's solid state drives. But there are two areas where hard drives still rule the roost: overall capacity and cost per gigabyte. Since most of us still need a hard drive for bulk storage, the question naturally becomes, "how big of a drive do you need?" For a while, 4TB drives were the top end of what was available, but recently Seagate, HGST, and Western Digital announced breakthroughs in areal density and other technologies that enabled the advent of the 6 terabyte hard drive. This round-up looks at three offerings currently on the market: a WD Red 6TB drive, a WD Green, and a Seagate 6TB enterprise-class model. Though the WD drives only sport a 5400RPM spindle speed, their increased areal density from 1TB platters lets them put up respectable performance. The Seagate Enterprise Capacity 6TB (also known as the Constellation ES series) offers the best performance at 7200 RPM, but it comes at nearly a $200 price premium. Still, at anywhere from .04 to .07 per GiB, you can't beat the bulk storage value of these new high-capacity 6TB HDDs.
Awfully long summary to say "you can haz 6TB HD" (Score:5, Funny)
Awfully long summary to say "you can haz 6TB HD"
Who cares about rotational speed these days? (Score:3, Insightful)
Is anyone with significant amounts of data not caching their frequently accessed data on SSD? Rotational is still about 8x cheaper than SSD these days, but the days of rotational speed for cold data are numbered. Storage is easily abstracted so it's not a legacy concern. A lot of shops I know have already invested in a complete switchover to full-SSD (we're talking racks of SSD) with tape backup.
Even my home file server uses two tiny second gen 64gb SSDs for read/write caching for ~20TB of data. I just buy the cheapest, biggest rotational drive whenever I start running out of room. When the price on those new Seagate 8TB drives (currently $230) drops to under $150 I will probably start swapping out my oldest 2TB drives to avoid having to upgrade the case in this decade.
Re:Who cares about rotational speed these days? (Score:5, Funny)
Is anyone with significant amounts of data not caching their frequently accessed data on SSD?
*looks around*
*sheepishly half-raises hand*
Re:Who cares about rotational speed these days? (Score:4, Insightful)
OK, I have a 5TB RAID array 50% full of music and a 3TB (soon to be upgraded to 4) full of videos.
These drives run quite fast enough for me to stream their contents - why would I want to cache them onto an SSD?
So I'm raising my hand but not sheepishly.
Re: (Score:2)
Re: (Score:2)
Similar here. All my "media" is on spinning disk, and it's entirely fit for the purpose. I use WD enterprise drives just to reduce the chance of an annoying failure (they're overpriced, really, but I freaking hate drive failures).
Sure, boot drive, personal stuff, home software projects, anything but music and videos, goes on SSD, but that's maybe 5% of my storage.
Re:Who cares about rotational speed these days? (Score:4, Interesting)
Re:Who cares about rotational speed these days? (Score:5, Informative)
Replace bay 1 with a SATA board that can hold 4 SSD drive cards. It's what I did. OS and cache in bay one, and three bays for three 6TB drives. Works great.
http://www.amazon.com/SATA-Dua... [amazon.com]
Dual-port version. I found a 4-port version and have it stuffed with four 128GB SSDs. Works great.
Re: (Score:2)
Ugh, RAID5 with 7k drives, that's just asking for data loss.
Re: (Score:2)
is your recommendation valid for RAID 1 as well? I am just curious...
-Thanks,
Re: (Score:2)
RAID1/0 is fine if your upper level can do parity checks, but if you can't rely on an upper layer then RAID6 is best. Of course folks looking out a bit are saying that even RAID6 or similar dual parity schemes will become insufficient, and so there's intense interest in newer coding schemes like rateless erasure codes, but I'm not sure those will ever scale down to the SOHO level other than through the use of cloud services. At enterprise scales I'm using RAID5 raidlets with advanced layouts that allow for e
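Since "parity" is doing a lot of work in this subthread, here's a minimal Python sketch of the single-parity idea RAID5 builds on (RAID6 adds a second, differently computed parity block so two losses are survivable). Purely illustrative; no real controller works like this:

```python
# Minimal illustration of RAID5-style single parity: the parity block is the
# XOR of the data blocks, so any one missing block can be rebuilt from the rest.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]      # data blocks on three drives
parity = xor_blocks(data)               # parity block on a fourth drive

# Simulate losing drive 1 and rebuilding its block from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)        # b'BBBB'
```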
Re: (Score:2)
Re: (Score:2)
When you have only 4 SATA ports on the micro server it doesn't matter if you have 600 drive bays.
Re: Who cares about rotational speed these days? (Score:2)
I have several microservers and all have 2 SSDs (write and read cache) + 4 HDDs. I use a USB stick for the OS, the CD-ROM's SATA cable for one SSD, and an eSATA-to-SATA cable to connect another SSD to the eSATA port. You will need a hacked BIOS to enable AHCI on all ports. Just search the web.
Re: (Score:2)
I've got 10 drives (6x2.5 + 4x3.5) in one of my Microservers.
Unfortunately ZFS shows up the weak CPU under heavy load, but most of the time (with an additional dual-port ethernet card as well) it's a real trooper.
Re: (Score:2)
AIUI the microservers in question have five SATA ports and one eSATA port on the motherboard. They also have a PCIe slot that you can use.
http://www.icydock.com/icy_tip... [icydock.com]
Looks like you have to watch your models though; it seems the latest generation has moved to using slimline optical drives :(
Re: (Score:2)
Even for home-based use, these big HDDs are increasingly being relegated to little more than mass media storage (oftentimes NAS-based), while SSDs are taking over everything else. Caching or not, rotational speeds (and the seek times they affect) end up being non-factors for a home user when all the drives are being used for is delivering video or audio content, particularly so if they're connecting to it over a LAN, since they'll in many cases spend orders of magnitude more (yet still not much) time buffering the
Re: (Score:2)
Even for home-based use, these big HDDs are increasingly being relegated to little more than mass media storage (oftentimes NAS-based), while SSDs are taking over everything else.
[citation needed]
I don't know what universe you live in, but unless you're talking about laptops/mobile, very few mass-market systems are being shipped with only an SSD. SSDs won't take over for many many years, unless there's a big change in how fast they catch up on price per GB. Mass market systems will continue to ship with one drive, and that drive will be a spinner for years to come. People into computers will certainly continue to ADD an SSD, but we're a small minority.
Re: (Score:2)
Most notably, Windows doesn't support it very well. Yeah, you can manually 'cache' data by installing application X on the SSD and storing the porn torrents on the HDD; but that gets to be a pain in the ass, quickly, for everything except the 'SSD large enough for all programs, HDD for media library' arrangement. From time to time a vendor will bodge something on(Intel's 'Sma
Re: (Score:2)
It would take about 12 hours to fully mirror from one 6TB WD drive to another, if your system can actually manage to sustain the 138MB/s shown on page 5 of the article. Obviously, the transfer will be slower if the data is actually being used for something.
If a disk dies, at best you are looking at half a day before the system is fully redundant agai
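The arithmetic behind that estimate, assuming the ~138 MB/s sustained rate the article measured for the WD drives (a best case; a rebuild under real load will be slower):

```python
# Back-of-the-envelope time to fully mirror a 6 TB drive at a fixed sustained rate.
capacity_bytes = 6e12        # a marketed "6TB" drive: 6 * 10^12 bytes
sustained_rate = 138e6       # ~138 MB/s sequential, per the article

hours = capacity_bytes / sustained_rate / 3600
print(f"best-case full mirror: {hours:.1f} hours")   # ~12.1 hours
```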
Re: (Score:2)
Re: (Score:2)
> RAID5 is great and all, but once a hard drive fails and you go non-redundant, waiting for the array to rebuild and hoping no other drive goes bad in the meantime is quite stressful.
Not if you have more than one copy.
RAID is no replacement for backups.
Re: (Score:2)
Arrrghhhh!!!!....
Actually, RAID can be used to speed up access and/or to survive a disk failure (depending on setup). While important in case of major disaster, restoring from backup generally knocks out the service altogether, while a simple (and fairly common in large data centers) disk failure wouldn't even be noticed by anyone but a system admin with a RAID designed to tolerate it.
Re: (Score:2)
That's why for larger systems you should use multiply redundant arrays. For example, RAID6 or 3 way mirroring. That way you can cover the increasingly probable case of losing a disk while the re-construction is in progress. It also becomes increasingly important to use drives from different batches and preferably different ages.
It's also helpful to have spares on-hand. I would like to see a concept of warm spares where the designated spares do not get powered except for periodic testing and when actually re
Re: (Score:2)
I'm using unRAID and backing everything up to a second server that's kept offline and off-site, so I can't have my collection destroyed by malware, theft, or a fire. It was a bit costly, but my data is secure enough.
Re: (Score:2)
With how slow drives are relative to their capacity, RAID-6 or RAID-Z2 are a must, not just for handling a disk failure during the time when the array is degraded and rebuilding from a hot spare, but for finding and fixing bit rot. Bit rot isn't caught by parity checking alone and ideally should be looked for at the filesystem level.
Re: (Score:2)
There are a number of workloads where caching is not so useful. For example, video conversion or 'big data' analysis where you are streaming the inputs. At that point, an SSD is more of an intermediate buffer than it is a cache (so only helpful for writing). If your use pattern streams more data out than the size of the SSD, then it's only getting in the way.
In a file server, unless you are using multiple gigE or faster interfaces, having plenty of RAM will make a much bigger difference than SSDs will.
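A quick sanity check on that claim: a single gigabit link tops out below the sequential rates these drives post (the 94% protocol-efficiency figure is a rough assumption; the drive numbers are the Sandra results summarized further down the thread):

```python
# Compare a single GbE link's usable throughput with the drives' sequential rates.
link_MB_per_s = 1e9 * 0.94 / 8 / 1e6   # ~117 MB/s payload, assuming ~94% protocol efficiency

drives = {"WD Green 6TB": 133, "WD Red 6TB": 138, "Seagate Enterprise 6TB": 167}

print(f"single GbE ceiling: ~{link_MB_per_s:.0f} MB/s")
for name, mb_s in drives.items():
    bound = "network-bound" if mb_s > link_MB_per_s else "disk-bound"
    print(f"{name}: {mb_s} MB/s sequential -> {bound} over one GbE link")
```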
Re: (Score:2)
do hybrid drives and custom NAS boxes count?
Re: (Score:2)
> Even my home file server uses two tiny second gen 64gb SSDs for read/write caching for ~20TB of data.
Did you configure this manually or just used something off the shelf? What is setup to accomplish this?
-Thanks,
Re: (Score:2)
I'm using Windows Hyper-V Server 2012 R2 (that's HYPER-V SERVER, not "Server"; it's free) and then a bunch of command-line commands. Do a Google search for "ssd tiering write-back cache". Works great on my Haswell-era home VM lab: 6 rotational 2TB hard drives and 2x4TB hard drives + 2 64GB SSDs I got cheap from a buddy.
Technically you could do this in Windows 8 if it weren't for artificial limitations. Clever DLL usage can get it to work, but it's best to just use Hyper-V Server 2012 R2, which is free.
Re: (Score:2)
Re: (Score:2)
Yes. Memory is even better, but of course after a point it gets stupidly expensive compared with an SSD, so it depends on the volume of frequently accessed data. Even swap/cache/l2arc on SSD is still a vast amount slower than caching in memory.
Re: (Score:2)
Is anyone with significant amounts of data not caching their frequently accessed data on SSD?
Poor people who can't afford an SSD? Being mostly employed, middle-class people here, or talking about business instead of home use, you guys still seem to forget that SSDs are still the Lexuses of the HD world (with PCIe SSDs being the Ferraris).
I can barely meet my storage needs, so on the rare occasion I have $100-200 to spend on drives (maybe once per year), I have to add as much space as I can. Already have 10 different drives between my tower and a 4-bay NAS I got lucky and found in the trash, 250GBx
You'd be nuts (Score:3, Funny)
You'd be nuts to trust your porn stash to a 6TB consumer drive right now. Buy two 4TB drives, and back that stuff up. Give the 6TBs a year or so to see if there are any reliability issues with these capacities, and for the price to drop a bit.
Re: (Score:2)
That's why I celebrate the arrival of the 6 TB drives. They really brought the price of the 4's down.
Buy two... (Score:3)
Re: (Score:3)
> You realistically can't backup 6TB worth of data
Sure you can. Just get another drive. Redundancy and backup strategies haven't changed just because drives are bigger. If anything, you have a bit of an advantage now as overall drive prices are lower (even on the high end).
Thanks to Seagate, I have tested this very procedure several times over the last year.
Re: (Score:2)
I solve the long-term data integrity problem by doing nightly snapshot deltas of my whole machine and my wife's machine (to a Raspberry Pi with an external drive at a buddy's house). Granted, that's a single point of failure, but it's out of the house in case my house {burns down, gets robbed, etc}
However, that doesn't fix the near term issue of me busily working away on a project when boo
Re: (Score:2)
Recreating my machine from install media is really not that gruesome of a prospect. Then again, I don't run the kind of OS that makes a naive sort of backup of one's user files a problematic nightmare requiring special arcane tools to deal with.
For the small stuff, I would rather use extra SATA ports (if I have any) for load balancing IO.
It's the mountains of multimedia data accumulated over 20+ years that worry me. Now, rebuilding that from the original media would take a while.
Backing up 6TB is no proble
Re:Buy two... (Score:4, Interesting)
Re: (Score:2)
... or you could set up ZFS with a mirrored vdev and keep snapshots. All the benefits of RAID1, combined with all the benefits of keeping any number of sync'ed disks laying around. If you have many disks, go with RAIDZ and get the reliability of RAID5 too.
If you store lots of data, once you ZFS you'll never want to go back.
Re: (Score:2)
How do you recover last Tuesday's file if you already synced the deletion?
By looking at last Tuesday's delta.
Rolling snapshots, ftw!
Re: (Score:2)
Take that second drive.
Put it in a USB enclosure.
Run a backup once a week.
Much less wear-and-tear on the drives. No big deal if something drops in your computer and shorts the 12V line, or you get water in it, or something else happens to the computer / SATA itself.
Also, you can then even do one full and multiple differential backups assuming you're not jamming the drive to capacity (handy if you suddenly discover that the thing you did last week was stupid and has corrupted your older data).
Live RAID is n
Re: (Score:3)
You realistically can't backup 6TB worth of data
Sure you can, we backup over 10x that every weekend.
Re: (Score:2)
Re: (Score:2)
and
Please don't take this the wrong way, but you don't know what you're talking about. RAID is not a backup, and backup is not RAID.
RAID is about keeping going through a hard drive failure. Backup is where you can recover any file within your backup time frame. If in a RAID configuration you delete a file and suddenly realize three months later that you really should have kept it, you're out of luck. If your OS decides to crap garbage all over your disk, RAID will faithfully mirror that garbage for you. It
Re: (Score:2)
My 5-ish TB of data over at Crashplan begs to differ (and yes, I have a local copy as well).
Mirrored drives are not a good idea for data protection - for one thing an accidental delete (or overwrite, or ransomware, or whatever) will take your data out completely and instantly. Much better to do incremental backups at the file level, so you can restore deleted or damaged files from whenever you want in their history. Even if you don't want to pay for the cloud service, the crashplan software will do this ver
Re: (Score:2)
RAID itself isn't a backup. However, using multiple disks with btrfs or ZFS is approaching it. As long as you actually make snapshots (no more rm -rf disasters), they address most of the pilot-error failures that prevented RAID from being the answer. You are still left with a few disastrous failure modes, like the power supply blowing up just wrong and putting AC line voltage on both drives, or a fire, but it is approaching.
You can do cross backups between two machines to eliminate the power supply failur
Re: (Score:2)
Nothing that's still inside the case is "backup". Sure, snapshots are useful for when you say "oops", and that's good to have, but still.
For home use, I regularly backup 6 TB of data by just copying the data to extra drives, and carrying those drives to work. That way if someone breaks in and steals everything that's not nailed down, I'm good.
For work, we're trying to move away from even using RAID. Once everything's on multiple servers, and you're provisioned to survive (and recover from) both server a
To save you the click through trouble... (Score:5, Informative)
Fastest: Seagate.
Best Warranty: Seagate.
Best Cache: WD Red....or the Seagate...the article conflicts between the first two pages.
Cheapest: WD Green.
Seagate notables: Full drive encryption available at a firmware level. AF and Legacy disks are separate models.
WD Red notables: 5400RPM spindle speed.
WD Green notables: None - nothing distinguishable from the Red drive, except a shorter warranty.
Sandra Benchmark results:
Seagate: 167W/168R.
WD Red: 138W/138R.
WD Green: 133W/133R.
Atto results are shown on a messy graph with no clear numbers, but Seagate wins that benchmark as well (albeit with a closer delta).
HD Tune Pro results basically reflect the transfer rates from above. Seek times for the Seagate are 11ms for both write and read, with the WD Red having a 16/17 set of scores and the WD Green being less than an integer higher. Burst rates are again better on the Seagate (276R/304W), with the WD Green being 217/220 and the Red being 217/218.
Crystal mark, basically the same numbers.
Futuremark, prettier graphs with wonderful titles like "video editing" and "importing pictures", with the results a closer race, each drive having its own task at which it wins (even the green). Not much different from the 3TB numbers, and not that much different from each other.
There were no mentions of reliability metrics; presumably none of the disks failed during benchmarking. Consult your usual biases and experience regarding which drive is likely to fail or not - this was strictly a benchmark review, and shockingly, the enterprise-grade drive with the highest rotational speed and biggest cache that costs the most money got the best score.
Re: (Score:3, Interesting)
There are some useful bits in the blog post by Backblaze [backblaze.com], as they care a lot about making a good choice between the two 6TB drives.
Re: (Score:2)
For their tests, they note that the WD Red uses slightly less energy (which is important to them when they have racks full of the drives) and also that it can lay down 1TB a day MORE than the Seagate. Again, a slightly different workload than most of us.
For them, the extra cost and power of the higher spec Seagate aren't worth it.
In summary: essentially equal performance (go to SSD if you need speed); essentially equal cost; slig
Re:To save you the click through trouble... (Score:4, Insightful)
There were no mentions of reliability metrics
...which is the only reason I'd care to read such an article. I have a Synology 4-bay NAS filled with drives for home stuff. Although it's not critical data and I have the most important folders backed up to Amazon Glacier, several TB of data is tied up in rips of our CD and DVD collection. While I could re-rip everything, the first effort took weeks and I'd strongly prefer not to have to again.
So for my specific application, I don't care a lot about raw performance because everything's going through a 1Gb switch anyway. However, this thing runs 24/7 and I'd like a reasonably warm fuzzy feeling that I'm not likely to have two drives fail simultaneously. NAS drives (I've bought WD Red most recently) are specced for exactly that environment and have things like anti-vibration mechanisms to make them less likely to spontaneously explode. For the exact opposite, check out the Seagate Barracuda Data Sheet [seagate.com]. Scroll down to where they're rated for 2,400 power-on hours. In other words, they're built to survive a whopping 3 months in a NAS.
If you're buying something to stick in your gaming computer, read the performance specs. If you actually care about the data you're writing, the reliability numbers are way more interesting.
Re: (Score:2)
Re: (Score:2)
I'm buying 6TB drives now because two years from now I'll really wish I would've. They're not much more in absolute dollars than 4TB drives, and WD Reds had a small margin over Greens the week I bought them. As of today:
At those prices, to me the Red drives are definitely worth the narrow price difference, and 6TB is reasonably priced. The Seagate is an expensive travesty.
Re: (Score:2)
That's an attractive offer, but there's zero chance I'm going to be the first to try a new technology that Seagate's just now rolling out.
Re: (Score:2)
Yep, reliability for large-capacity drives seems far more important. For best performance, use an SSD.
I just bought a couple of 6TB WD Red drives, since they claim they're specifically designed and tested for NAS devices. I was replacing a failing 3TB Seagate Barracuda drive and wanted to increase capacity at the same time. I've got a Synology 5-bay, and like you, have an extensive DVD/Blu-ray ripped collection. I technically have "backups" on the discs, but it would be a pain in the ass to re-rip everyt
Re:To save you the click through trouble... (Score:4, Informative)
I started way back when with a Drobo and a 1TB WD Black. When I wanted to grow that, 2TB drives were the sweet spot so I added a 2TB WD Green. Same for a year or so later, when I added a Seagate 3TB Barracuda. When I upgraded the Drobo to the DS412+, I threw in a WD Green 4TB.
Six months ago, the Seagate died. Tech support was decent and they replaced it under warranty with a refurb that had a 90 day warranty. At day 95, the replacement died. That's when I upped the ante and replaced it with a 6TB WD Red.
I keep watching SMART stats on that WD Black 1TB with 25,434 hours on it but it seems to be holding steady. The WD Greens aren't NAS drives but they're chugging away with nary a scary SMART data point. Seagate can go screw themselves.
Re: (Score:2)
Well, I'll be giving WD a try for my next set of drives. It's really hard to know with such small sets of sample data, and nothing equivalent to compare them against. I guess I'll just have to see how those drives are holding up in two years time!
Re: (Score:2)
For the exact opposite, check out the Seagate Barracuda Data Sheet [seagate.com]. Scroll down to where they're rated for 2,400 power-on hours. In other words, they're built to survive a whopping 3 months in a NAS.
If you're buying something to stick in your gaming computer, read the performance specs. If you actually care about the data you're writing, the reliability numbers are way more interesting.
Look at the AFR on the data sheet. It's less than 1%. So, obviously the MTBF is not 2400 hours. It's >875,000 hours. An MTBF of 2400 hours translates to an AFR of 97.4%, which is obviously not going to fare very well in a prototype lab, not to mention the marketplace.
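The conversion being used there, as a small Python check (the usual constant-failure-rate approximation, with 8,760 hours in a year); it reproduces both the 97.4% and the roughly 875,000-hour figures:

```python
import math

HOURS_PER_YEAR = 8760

def afr_from_mtbf(mtbf_hours):
    """Annualized failure rate under the standard exponential (constant-rate) model."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def mtbf_from_afr(afr):
    return -HOURS_PER_YEAR / math.log(1 - afr)

print(f"MTBF   2,400 h -> AFR {afr_from_mtbf(2_400):.1%}")    # ~97.4%
print(f"MTBF 875,000 h -> AFR {afr_from_mtbf(875_000):.2%}")  # ~1.00%
print(f"AFR 1%         -> MTBF {mtbf_from_afr(0.01):,.0f} h") # ~872,000 h
```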
Re: (Score:2)
Look at the AFR on the data sheet. It's less than 1%. So, obviously the MTBF is not 2400 hours. It's >875,000 hours.
There's a difference between powered-on hours and total expected lifetime. These drives have a two-year warranty, so they're betting that a drive will last for at least 1,200 powered-on hours per year, or about 3 hours a day. Also, MTBF does not mean that a single drive will last 875,000 hours (or 100 years), just that only about one in a hundred drives is expected to die per year.
In the same data sheet, they claim the drive is ideal for:
- Desktop or all-in-one PCs
- Home servers
- PC-based gaming systems
- Desktop RAID
-
Re: (Score:2)
I understand the relationship between MTBF and AFR. Of course, no one HDD will last 100 years, let alone on average. However, think about it. How in the world would an HDD manufacturer come up with an expected 2,400-hour lifetime? Qualification tests involve testing 1,000 drives for 1,000 hours, from which a few drives will fail and the AFR and MTBF are derived. There is no way a 2,400-hour lifetime squares with a 1% AFR. The AFR numbers are clear. I'm not sure what "power-on hours" means. It's obviously not MTBF
Re: (Score:2)
I'm not sure what "power-on hours" means. It's obviously not MTBF. Is it max lifetime?
It's just that: how many hours it's designed to be turned on for. Compare to a lightbulb labeled to last for 1,000 hours but marketed as lasting for two years, with the fine print explaining "* when used for an hour per day". The expectation is that this particular drive will last for 24 calendar months, but that it won't be powered up and spinning the whole time. Imagine an office computer that gets turned off at night and weekends, and puts itself to sleep regularly throughout the day.
Given that this is a
Re: (Score:2)
6TB on one drive is bad (Score:2)
Would much rather have RAID6 of 5 2TB drives. Basically would rather have the most drives in the biggest RAID that would allow the lowest price per gigabyte...
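As a rough way to compare those layouts: usable capacity is (drives minus parity drives) times drive size, and cost per usable TB follows from that. A small Python sketch; the $279 figure is the WD Red price quoted elsewhere in this discussion, while the 2TB price is just a placeholder to substitute your own number into:

```python
# Usable capacity, fault tolerance, and cost per usable TB for a few layouts.
def usable_tb(n_drives, drive_tb, parity_drives):
    return (n_drives - parity_drives) * drive_tb

configs = [
    # (label,                     drives, TB each, parity, example $/drive)
    ("RAID6 of 5 x 2TB",               5,       2,      2,   90),   # placeholder price
    ("RAID1 of 2 x 6TB WD Red",        2,       6,      1,  279),   # price quoted in thread
    ("single 6TB, no redundancy",      1,       6,      0,  279),
]

for label, n, tb, parity, price in configs:
    cap = usable_tb(n, tb, parity)
    print(f"{label}: {cap} TB usable, ${n * price / cap:.0f}/usable TB, "
          f"survives {parity} drive failure(s)")
```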
I used to be a 7200rpm and HDD aficionado. (Score:2)
I purchased the first 7200rpm disk available to consumers nearly 20 years ago now. The WD Expert, 18GB if I recall.
http://www.prnewswire.com/news... [prnewswire.com]
I've always hated the performance of disks, being a big enthusiast, primarily because I knew they were the biggest bottleneck by far.
Fast forward to today and I am utterly bamboozled why people continue to purchase the bastard things. I detest them. They run hotter, cost more, are slightly more likely to fail, are noisier and the performance difference is utterly neglig
Re:"NAS" hard drives? (Score:4, Informative)
6TB isn't ready for the serious archivist, who, by my own subjective definition, only purchases drives warrantied for 5 years. It's still $160 or so for a 4TB like that.
TL;DR: Once you go WD Black you never go back.
Personal versus "industrial" approaches (Score:4, Insightful)
A ten year old tape you pull out of a box is going to work apart from a tiny fraction of a percentage of the time. A drive - not so likely since the spindle lubricant doesn't last forever and polished surfaces stick via diffusion. A twenty year old tape should have been transcribed years ago but is going to work unless it has got hot or damp in storage. A thirty year old tape is probably brittle and needs to be read with care, but I've sent a couple of dozen off to be transcribed. It was seismic data so file formats that could handle a few bits missing here or there, and errors outside the file headers have little impact due to "stacking" multiple datasets that overlap. However those reels from the early 1980s and late 1970s preserved effectively all the data put on them despite less than ideal storage (a shed in a humid subtropical climate).
Hard drives are not designed to last for a decade in a box. A decade powered up is, ironically, likely to result in fewer dead drives than a decade powered off on a shelf. Tapes don't have to deal with high speeds and are instead designed to last. They die from the substrate getting brittle over decades, the oxide peeling off the tape over decades, and magnetised zones on one section of tape magnetising an area on the next loop of tape, once again over decades.
All that said, if you only have 6TB or so to keep, and you don't want to go for a pile of Blu-ray discs, getting a couple of drives every few years (3? 5? 7?) is a lot more sane than mucking about with tapes.
Re: (Score:3)
Which is why you do two tapes (or two external drives, Blu-ray, whatever) for whatever you don't want to lose.
There is an enormous second hand market and you transcribe to something new before that market dries up and you can't get something that can read the format any more. If you miss that boat then you send it to someone who can read it (see my bit about reels fro
Re: (Score:3)
Re: (Score:2)
Yes, they are positioned between basic desktop/laptop drives and enterprise-grade drives. As I understand it, the differences are mostly mechanical.
They are designed to be more rugged than basic desktop/laptop drives, and expected to be run at higher duty cycles. General use desktop/laptop drives are NOT intended for high duty cycles. NAS drives ARE. They are mechanically better.
But they lack specific mechanical features of enterprise drives that are meant to deal with vibrational issues related to having a
Re: (Score:2)
I thought there was also some difference in the firmware, i.e. being tuned with the expectation of running in a RAID.
Re: (Score:3)
They are mechanically better.
Can you provide a citation for that?
But they lack specific mechanical features of enterprise drives that are meant to deal with vibrational issues related to having a large number of drives in a single enclosure.
Wouldn't that make "enterprise" drives more reliable? Except, actual data [backblaze.com] shows that they are NOT more reliable. So maybe "enterprise drive" is just a BS marketing term to separate fools from their money, and that is why everyone that has actually looked at the facts, such as Google, Facebook, etc. doesn't waste their money on them.
Re: (Score:2)
Re: "NAS" hard drives? (Score:2)
They're much better. I've got a dozen or so of the 4TB Hitachis [amazon.com] now and I'm replacing all of my non-'NAS' drives with them.
The 'NAS' range appears to be the old-fashioned quality drives in modern packaging. I have regular Deskstars in 2 & 3TB configurations and they are really, really slow drives for backing storage, even with SSD caches in front of them. I plan to buy the 'NAS'-labeled drives from now on. The non-NAS drives only seem to be acceptable for long sequential access. It's nice to have slow ch
Re:"NAS" hard drives? (Score:5, Insightful)
I've seen, in general, three lines of HDDs: basic desktop/laptop drives, premium desktop/laptop, and enterprise-grade drives, which are designed to all wind up at the same firmware level to minimize issues when attached to RAID controllers.
However, a "NAS" hard drive? Is this something a step down from enterprise drives, but designed for a device like a Drobo, or some other solution that really doesn't care about background drives, uses RAID 5 or 6, and expects drives to blow out over time?
Are the Red drives designed to be paired or run in RAID arrays specifically, as opposed to the Green line that is made for power savings?
I always thought that the NAS/RAID drives allowed Time Limited Error Recovery [wikipedia.org] to be specified, which would prevent RAID controllers from interpreting a long error recovery interval as a drive timeout and erroring out that drive and removing it from the array. The NAS and Enterprise drives do allow this option to be set. [custhelp.com]
Re: (Score:2)
Well, so do HGST and Toshiba desktop drives, so the distinction is not quite as clear cut...
Do you have a reference for this? This information doesn't seem trivial to find, the closest I can find is "The Deskstar NAS also offers configurable advanced error recovery control to fine-tune RAID performance." in a review of the HGST 4TB Deskstar NAS HDD [storagereview.com] and no such claim in the review of their non-NAS drives. Do you have a reference showing that HGST supports TLER/ERC/CCTL across their desktop (non-NAS) drive line? I don't think Toshiba has a NAS drive though I'm not very familiar with their product l
Re:"NAS" hard drives? (Score:5, Informative)
Are the Red drives designed to be paired or run in RAID arrays specifically, as opposed to the Green line that is made for power savings?
Pretty much, yes. The Red have better vibration tolerance, and the firmware is tweaked to fit a NAS workload better. For example, a Green will park the head as quickly as it can, which for always-on machines can lead to a Green disk reaching its "Load/Unload Cycle" tolerance in months and dying prematurely. The Red will not do this.
There's also a difference in how they handle unreadable sectors and such errors which makes the Red play nicer with hardware RAID controllers. An unrecoverable read error in a Green can cause the whole array to go down.
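To put a number on "in months", here is a rough Python sketch of how aggressive idle parking eats through the load/unload budget. Both inputs are assumptions rather than figures from the article: 300,000 is a typical spec-sheet load/unload rating, and one park/wake cycle every ~30 seconds models an always-on box whose light but regular traffic keeps waking a drive that parks whenever it idles:

```python
# How long until an aggressively parking drive hits a typical load/unload rating.
rated_cycles = 300_000       # assumed typical spec-sheet load/unload rating
seconds_per_cycle = 30       # assumed: one park/wake cycle every ~30 s on an always-on box

days = rated_cycles * seconds_per_cycle / 86_400
print(f"~{days:.0f} days (~{days / 30:.1f} months) to reach the rated cycle count")
```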
Re:"NAS" hard drives? (Score:5, Informative)
Of course, if you're running RAID, the best thing is to fail the read quickly and rebuild the sector from parity.
Re: (Score:3, Insightful)
If you can't figure out what he meant from the context then I think you might want to re-evaluate who the worthless fuck is.
Re: (Score:2)
1. I suspect even the author couldn't tell you whether it's .04 to .07 cents or dollars [verizonmath.com] per GiB.
2. By my math it's $279/6144=$0.05 to $479/6144=$0.08 per GiB, not $0.04 to $0.07.
3. Why are we using GiB when hard drive capacities are expressed in GB/TB?
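For what it's worth, here are the numbers both ways, using the two prices quoted above; note that a marketed "6TB" drive is 6x10^12 bytes, which works out to about 5,588 GiB:

```python
# Cost per GB and per GiB for a marketed "6TB" drive at the two quoted prices.
capacity_bytes = 6e12
gb  = capacity_bytes / 1e9      # 6,000 GB
gib = capacity_bytes / 2**30    # ~5,588 GiB

for price in (279, 479):
    print(f"${price}: ${price / gb:.3f}/GB, ${price / gib:.3f}/GiB")
```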
Re: (Score:2)
Marketing weasels.
Re: (Score:2)
Well his own username does indicate that he's scum....
Re: (Score:2)
Re: (Score:3)
Re:What? (Score:4, Funny)
Hey, the S looked like it was crossed out, OK?
Bert
Re: (Score:2)
Re:HDD Advantage (Score:5, Informative)
Having experienced SSD failures: NO, you can't read from them. SSDs fail catastrophically; you do not get a chance to read from them before full failure. They just completely fail and all data is gone forever.
Re: (Score:2)
I think that depends on the nature of the failure. The flash media itself will fail as described, but that fact will be almost completely hidden by a decent drive controller with a storage reserve; by its nature the failure is easy to recover from, at least until such time as a sizable percentage of the drive capacity has failed that way.
On the other hand it sounds like the SSD controllers tend to be less reliable than on a HDD, and if the controller goes you get an immediate catastrophic failure. It doesn't m
Re:HDD Advantage (Score:4, Interesting)
BINGO
The underlying issues with flash can be and are successfully hidden by the controllers in modern SSDs for most workloads (very heavy write loads can be problematic), but that hiding comes at a price. The firmware in an SSD is far more complex than in an HDD, and so for a given level of engineering effort it will be less reliable. In particular, I've noticed corruption after unclean shutdowns to a far greater extent on SSDs than HDDs.
Re: (Score:2)
Re:HDD Advantage (Score:5, Interesting)
Once the electrons are out of the gate, the data is -gone-. No amount of recovery is going to do the job, ever.
This is my biggest concern with SSDs. Yes, they can have a longer MTBF, but when they go, they take your data with them, making backups more imperative.
The ironic thing? Just as SSDs make the need for backups that much more urgent [1], we have far fewer tools for backup than we did on PCs 20 years ago (when an average user could get a desktop tape drive, a ZIP drive, a removable SCSI hard disk, or other media). For non-enterprise backups, we have external hard disks, USB flash drives, and offsite file servers [2]. Even optical drives are becoming uncommon. External hard disks and USB flash drives are not archival media. They -might- hold their data, but are not warrantied for it.
It would be nice if some company made a disk-to-disk-to-removable-media backup appliance. The backup program would copy data to the device, and data would stay on a set of RAID-protected HDDs, as well as eventually being copied to removable media [3]. A bare-metal restore would be easy -- if the appliance is connected via USB, have it present a DVD-ROM with the OS or recovery software. If on a LAN, have a USB flash drive or image that would get a machine booted enough to find the appliance and start a restore.
[1]: With HDDs, a recovery from a format isn't too difficult. SSDs usually follow up a format with a TRIM command, zeroing (or more exactly, writing 1s) to all the blocks, either right then, or as the drive feels like it. "Unformatting" a SSD is pretty much impossible with a modern OS that does proper TRIM commands. Add a decently smart encryption system like BitLocker that zeroes out the sectors with master volume keys multiple times, and it can almost be assured that a delete or a format results in data forever gone.
[2]: Cloud storage seems like a working idea, but it can take a good while to fetch lost documents and rebuild the entire OS and machine. With a local backup solution, most backup programs offer a simple bare-metal restore, no Internet access needed. There is also the fact that a machine needs to have the OS, updates, and the cloud provider's software loaded and logged in before a restore can happen. Having the OS local means a complete bare metal restore is a "press 'restore' and walk off" action.
[3]: Tape comes to mind. The main advantage of tape (or offline media in general) is that some hacker who gets access to the SAN controller can't just purge all media with a single command. A lot of companies have excellent replication of SAN data, but that replication will happily replicate the "delete everything, including all snapshots" as well. Plus, tapes can be physically set read-only where only a reflash of the tape drive could allow the cartridge to be written to. I wish someone could make a consumer level tape drive, perhaps using a SSD as a buffer to prevent shoe-shining. There is a Thunderbolt based tape drive for Macs by mTape for $3699. If someone made a product like this (but a price more palatable to consumers) that could tolerate USB 3 (or maybe even USB 2), and work well under Windows, Linux, and other operating systems, they might have a best seller. Especially with the fact that intruders now have moved from just accessing data to actively modifying and destroying it, so backups are even more crucial than they were before this year.
In fact, I'd say that with the ease data is permanently destroyed, a consumer level backup appliance might be quite a seller.
Re: (Score:2)
Re: (Score:2)
Samsung's 850 Pros have a 10-year warranty, although they are still quite expensive.
Also techreport.com has been running an endurance test [techreport.com], and a couple of the drives have reached 1.5 petabytes of writes without failing. I think they all lasted well beyond the manufacturers' expected write limits.
Basically, they've reached the point now that the average consumer can't wear out a drive.
Re: (Score:3)
And you'd trust your data to a first-gen drive technology? Backups are great and all, but it's still a hassle. I've been burned by enough cutting edge hardware already, I'll let the IT departments deal with the teething pains. I'll be waiting for at least v1.1, maybe 2.1, before I'm tempted by a marginal up-front cost reduction.
Re: (Score:2)
I'd love to know how they can be "high reliability" drives when they're new tech and haven't been tested out in the real world for an extensive period of time. "High reliability" is only something you can demonstrate retroactively, or with a proven technology.
Re: (Score:2)
The truth is they're all at the bottom of the barrel. Nothing is worse than X except for Y. Now Y is really crap. Only Z is worse than Y. Never Use Z. The only thing worse than Z is X. For every major brand out there you will find glowing reviews and horrific failure reports in about equal amounts. They all have good runs and bad runs. Occasionally they will have a particular model that is nothing but fail.
Re: (Score:2)
I beg to differ.
Anecdote: I have a stack of 4 Seagate ST325082A 250GB PATA drives I bought in 2006 with the specific intention of building a RAID. That RAID is still running, almost uninterrupted since November of that year (breaking for fan replacements and the odd power cut), with no failures.
Re: (Score:2)
Drives that old are pretty irrelevant in a discussion about multi-terabyte drives. While I do have a single one of those notorious 1.5TB Seagates, all of its siblings died a long time ago.
Seagate has done quite a bit lately to earn its bad reputation.
New Seagates not like old (Score:2)