Data Storage Hardware

6 Terabyte Hard Drive Round-Up: WD Red, WD Green and Seagate Enterprise 6TB

MojoKid writes The hard drive market has become a lot less sexy in the past few years thanks to SSDs. What we used to consider "fast" for a hard drive is slow compared to even the cheapest of today's solid state drives. But there are two areas where hard drives still rule the roost: overall capacity and cost per gigabyte. Since most of us still need a hard drive for bulk storage, the question naturally becomes, "how big a drive do you need?" For a while, 4TB drives were the top end of what was available, but recently Seagate, HGST, and Western Digital announced breakthroughs in areal density and other technologies that enabled the advent of the 6 terabyte hard drive. This round-up looks at three offerings currently on the market: a WD Red 6TB drive, a WD Green, and a Seagate 6TB enterprise-class model. Though the WD drives sport only a 5,400 RPM spindle speed, their higher-areal-density 1TB platters still let them put up respectable performance. The Seagate Enterprise Capacity 6TB (also known as the Constellation ES series) drive offers the best performance at 7,200 RPM, but it carries nearly a $200 price premium. Still, at anywhere from $0.04 to $0.07 per gigabyte, you can't beat the bulk storage value of these new high-capacity 6TB HDDs.
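
For a rough sense of the cost-per-gigabyte math, here is a minimal sketch in Python; the prices are assumptions for illustration (the summary gives only the ~$200 premium and the $0.04-$0.07 range), not quotes from the review:

    # Back-of-envelope cost per gigabyte.
    drives = {
        "WD Green 6TB": 240.00,            # assumed street price, USD
        "WD Red 6TB": 270.00,              # assumed street price, USD
        "Seagate Enterprise 6TB": 460.00,  # assumed: roughly a $200 premium
    }
    capacity_gb = 6000  # 6 TB in decimal gigabytes

    for name, price in drives.items():
        print(f"{name}: ${price / capacity_gb:.3f}/GB")
    # -> roughly $0.040 to $0.077 per GB, the range quoted above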
  • by xxxJonBoyxxx ( 565205 ) on Monday December 29, 2014 @03:20PM (#48691251)

    Awfully long summary to say "you can haz 6TB HD"

  • by Hadlock ( 143607 ) on Monday December 29, 2014 @03:21PM (#48691261) Homepage Journal

    Is anyone with significant amounts of data not caching their frequently accessed data on SSD? Rotational is still about 8x cheaper than SSD these days, but the days of rotational storage even for cold data are numbered. Storage is easily abstracted, so it's not a legacy concern. A lot of shops I know have already invested in a complete switchover to full-SSD (we're talking racks of SSDs) with tape backup.
     
    Even my home file server uses two tiny second-gen 64GB SSDs for read/write caching in front of ~20TB of data. I just buy the cheapest, biggest rotational drive whenever I start running out of room. When the price of those new Seagate 8TB drives (currently $230) drops under $150, I'll probably start swapping out my oldest 2TB drives to avoid having to upgrade the case this decade.

    • Is anyone with significant amounts of data not caching their frequently accessed data on SSD?

      *looks around*
      *sheepishly half-raises hand*

      • by Anonymous Coward on Monday December 29, 2014 @03:50PM (#48691459)

        OK, I have a 5TB RAID array 50% full of music and a 3TB drive (soon to be upgraded to 4TB) full of videos.

        These drives run quite fast enough for me to stream their contents - why would I want to cache them onto an SSD?

        So I'm raising my hand but not sheepishly.

        • I cache to an SSD to speed up writes to an UnRaid array that uses consumer-grade drives. It makes a big difference as long as your writes fit within the SSD's capacity. There's no need to cache even high-bitrate 1080p files for reads.
        • by lgw ( 121541 )

          Similar here. All my "media" is on spinning disk, and it's entirely fit for the purpose. I use WD enterprise drives just to reduce the chance of an annoying failure (they're overpriced, really, but I freaking hate drive failures).

          Sure, the boot drive, personal stuff, home software projects, anything but music and videos goes on SSD, but that's maybe 5% of my storage.

    • by dagamer34 ( 1012833 ) on Monday December 29, 2014 @03:37PM (#48691369)
      On a 4-bay NAS box, there aren't enough slots to have an SSD acting as a cache unless you want to give up one of your very valuable bays.
      • by Lumpy ( 12016 ) on Monday December 29, 2014 @03:48PM (#48691449) Homepage

        Replace bay 1 with a SATA board that can hold four SSD cards. That's what I did: OS and cache in bay one, and three bays left for three 6TB drives. Works great.

        http://www.amazon.com/SATA-Dua... [amazon.com]

        That's the dual-port version; I found a four-port version and have it stuffed with four 128GB SSDs.

        • by afidel ( 530433 )

          Ugh, RAID5 with 7k drives, that's just asking for data loss.

          • by ls671 ( 1122017 )

            Is your recommendation valid for RAID 1 as well? I'm just curious...

            -Thanks,

            • by afidel ( 530433 )

              RAID 1/0 is fine if your upper layer can do parity checks, but if you can't rely on an upper layer, then RAID 6 is best. Of course, folks looking out a bit are saying that even RAID 6 and similar dual-parity schemes will become insufficient, so there's intense interest in newer coding schemes like rateless erasure codes, but I'm not sure those will ever scale down to the SOHO level other than through the use of cloud services. At enterprise scales I'm using RAID 5 raidlets with advanced layouts that allow for e…

        • Neat. Thanks for posting that link!
    • Even for home-based use, these big HDDs are increasingly being relegated to little more than mass media storage (oftentimes NAS-based), while SSDs are taking over everything else. Caching or not, rotational speeds (and the seek times they affect) end up being non-factors for a home user when all the drives are asked to do is deliver video or audio content, particularly if users connect over a LAN, since they'll in many cases spend orders of magnitude more (yet still not much) time buffering the…

      • Even for home-based use, these big HDDs are increasingly being relegated to little more than mass media storage (oftentimes NAS-based), while SSDs are taking over everything else.

        [citation needed]

        I don't know what universe you live in, but unless you're talking about laptops/mobile, very few mass-market systems ship with only an SSD. SSDs won't take over for many, many years unless there's a big change in how fast they catch up on price per GB. Mass-market systems will continue to ship with one drive, and that drive will be a spinner for years to come. People into computers will certainly continue to ADD an SSD, but we're a small minority.

    • Depends on the context. Industry-wide, everyone either is or soon will be. In individual smaller setups, there are...complications.

      Most notably, Windows doesn't support it very well. Yeah, you can manually 'cache' data by installing application X on the SSD and storing the porn torrents on the HDD, but that quickly gets to be a pain in the ass for everything except the 'SSD large enough for all programs, HDD for media library' arrangement. From time to time a vendor will bodge something on (Intel's 'Smart Response'…
    • by hamjudo ( 64140 )
      If your data is valuable, you will need to mirror the drives or use RAID. So one limitation is how quickly you can add a drive to your mirror system.

      It would take about 12 hours to fully mirror one 6TB WD drive to another, if your system can actually sustain the 138MB/s shown on page 5 of the article. Obviously, the transfer will be slower if the data is actually being used for something.

      If a disk dies, at best you are looking at half a day before the system is fully redundant again…
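
      For reference, the arithmetic behind that estimate is simple; a quick sketch in Python, assuming the article's best-case 138MB/s holds for the entire copy (it won't on the inner tracks):

        # Best-case time to mirror a full 6TB drive.
        capacity_bytes = 6e12        # 6 TB, decimal bytes
        rate = 138e6                 # 138 MB/s sustained, per the review

        hours = capacity_bytes / rate / 3600
        print(f"{hours:.1f} hours")  # ~12.1 hours before redundancy is restored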
      • by tlhIngan ( 30335 )

        If your data is valuable, you will need to mirror the drives or use RAID. So one limitation is how quickly you can add a drive to your mirror system.

        It would take about 12 hours to fully mirror one 6TB WD drive to another, if your system can actually sustain the 138MB/s shown on page 5 of the article. Obviously, the transfer will be slower if the data is actually being used for something.

        If a disk dies, at best you are looking at half a day before the system is fully redundant again…

        RAID5 is great and all, but once a hard drive fails and you go non-redundant, waiting for the array to rebuild and hoping no other drive goes bad in the meantime is quite stressful.

        • by jedidiah ( 1196 )

          > RAID5 is great and all, but once a hard drive fails and you go non-redundant, waiting for the array to rebuild and hoping no other drive goes bad in the meantime is quite stressful.

          Not if you have more than one copy.

          RAID is no replacement for backups.

      • by sjames ( 1099 )

        That's why, for larger systems, you should use multiply-redundant arrays, for example RAID6 or three-way mirroring. That way you can cover the increasingly probable case of losing a disk while reconstruction is in progress. It also becomes increasingly important to use drives from different batches and preferably of different ages.

        It's also helpful to have spares on hand. I would like to see a concept of warm spares, where the designated spares are not powered except for periodic testing and when actually re…

      • >If your data is valuable, you will need to mirror the drives or use RAID

        I'm using UnRaid and backing everything up to a second server that's kept offline and off-site, so my collection can't be destroyed by malware, theft, or a fire. It was a bit costly, but my data is secure enough.
      • by mlts ( 1038732 )

        With how slow drives are relative to their capacity, RAID-6 or RAID-Z2 is a must, not just for surviving a second disk failure while the array is degraded and rebuilding from a hot spare, but for finding and fixing bit rot. RAID parity checking alone won't catch bit rot; ideally, it should be looked for at the filesystem level.
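
        To make "looked for at the filesystem level" concrete, here is a minimal sketch of checksum-based bit-rot detection in Python: record a hash manifest once, then re-scan and flag mismatches. The manifest name and layout are made up for illustration, and a real filesystem like ZFS pairs such checksums with redundancy so it can repair, not just detect (and won't flag legitimate edits the way this toy does):

          # Toy bit-rot scanner. First run writes checksums.json;
          # later runs report files whose contents changed.
          import hashlib, json, os, sys

          MANIFEST = "checksums.json"  # hypothetical manifest location

          def sha256(path):
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1 << 20), b""):
                      h.update(chunk)
              return h.hexdigest()

          def scan(root):
              return {os.path.join(d, n): sha256(os.path.join(d, n))
                      for d, _, files in os.walk(root) for n in files}

          if __name__ == "__main__":
              current = scan(sys.argv[1])
              if os.path.exists(MANIFEST):
                  with open(MANIFEST) as f:
                      old = json.load(f)
                  for path, digest in current.items():
                      if path in old and old[path] != digest:
                          print("checksum mismatch (bit rot or edit):", path)
              else:
                  with open(MANIFEST, "w") as f:
                      json.dump(current, f)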

    • by sjames ( 1099 )

      There are a number of workloads where caching is not so useful. For example, video conversion or 'big data' analysis where you are streaming the inputs. At that point, an SSD is more of an intermediate buffer than it is a cache (so only helpful for writing). If your use pattern streams more data out than the size of the SSD, then it's only getting in the way.

      In a file server, unless you are using multiple gigE or faster interfaces, having plenty of RAM will make a much bigger difference than SSDs will.

    • by ihtoit ( 3393327 )

      Do hybrid drives and custom NAS boxes count?

    • by ls671 ( 1122017 )

      > Even my home file server uses two tiny second-gen 64GB SSDs for read/write caching in front of ~20TB of data.

      Did you configure this manually or just use something off the shelf? What setup accomplishes this?

      -Thanks,

      • by Hadlock ( 143607 )

        I'm using Microsoft Hyper-V Server 2012 R2 (that's HYPER-V SERVER, not "Server"; it's free) and then a bunch of command-line commands. Do a Google search for "ssd tiering write-back cache". Works great on my Haswell-era home VM lab: six rotational 2TB hard drives and two 4TB hard drives, plus two 64GB SSDs I got cheap from a buddy.

        Technically you could do this in Windows 8 if it weren't for artificial limitations. Clever DLL usage can get it to work, but it's best to just use Hyper-V Server 2012 R2, which is free.

    • by Greyfox ( 87712 )
      Most of my data is infrequently accessed video files. I edit them down and upload them to YouTube, but I keep the raw video around for future projects. A 4TB rotational drive should do me for a few years' worth of videos, but I'm still tempted to set up a storage server on my network so I can play around with Hadoop's HDFS.
    • by dbIII ( 701233 )

      Is anyone with significant amounts of data not caching their frequently accessed data on SSD?

      Yes. Memory is even better, but of course after a point it gets stupidly expensive compared with an SSD, so it depends on the volume of frequently accessed data. Even swap/cache/L2ARC on SSD is still vastly slower than caching in memory.

    • Is anyone with significant amounts of data not caching their frequently accessed data on SSD?

      Poor people who can't afford an SSD? Being mostly employed, middle-class people here, or talking about business instead of home use, you guys still seem to forget that SSDs are the Lexuses of the HD world (with PCIe SSDs being the Ferraris).
      I can barely meet my storage needs, so on the rare occasion I have $100-200 to spend on drives (maybe once per year), I have to add as much space as I can. I already have 10 different drives between my tower and a 4-bay NAS I got lucky and found in the trash, 250GB x…

  • by Anonymous Coward on Monday December 29, 2014 @03:29PM (#48691309)

    You'd be nuts to trust your porn stash to a 6TB consumer drive right now. Buy two 4TB drives, and back that stuff up. Give the 6TBs a year or so to see if there are any reliability issues with these capacities, and for the price to drop a bit.

    • by sjames ( 1099 )

      That's why I celebrate the arrival of the 6 TB drives. They really brought the price of the 4's down.

  • by Mysticalfruit ( 533341 ) on Monday December 29, 2014 @03:34PM (#48691351) Homepage Journal
    I don't build a machine these days that doesn't have mirrored hard drives. You realistically can't back up 6TB worth of data, so barring some horrible FS failure (which is rare these days in Linux land), your best bet is RAID1.
    • by jedidiah ( 1196 )

      > You realistically can't back up 6TB worth of data

      Sure you can. Just get another drive. Redundancy and backup strategies haven't changed just because drives are bigger. If anything, you have a bit of an advantage now as overall drive prices are lower (even on the high end).

      Thanks to Seagate, I have tested this very procedure several times over the last year.

      • We've now conflated two distinct concerns into a single subject here: functional resilience and long-term data integrity.
        I solve the long-term data integrity problem by doing nightly snapshot deltas of my whole machine and my wife's machine (to a Raspberry Pi with an external drive at a buddy's house). Granted, that's a single point of failure, but it's out of the house in case my house burns down, gets robbed, etc.

        However, that doesn't fix the near-term issue of me busily working away on a project when boo…
        • by jedidiah ( 1196 )

          Recreating my machine from install media is really not that gruesome a prospect. Then again, I don't run the kind of OS that makes a naive sort of backup of one's user files a problematic nightmare requiring special arcane tools to deal with.

          For the small stuff, I would rather use extra SATA ports (if I have any) for load balancing IO.

          It's the mountains of multimedia data accumulated over 20+ years that worry me. Rebuilding that from the original media would take a while.

          Backing up 6TB is no problem…

    • Re:Buy two... (Score:4, Interesting)

      by CastrTroy ( 595695 ) on Monday December 29, 2014 @03:52PM (#48691473)
      If you want to avoid problems with FS failures and accidental deletions, you can go without RAID and just sync the disks every night. This is what I do on my home desktop, and it works just fine. At worst, I'll lose a day's worth of data, which wouldn't be the end of the world. I think three drives, with two in mirrored RAID and one taking a nightly backup, would be ideal: you could lose a drive without losing any data, and any kind of file system error or accidental deletion could also be easily dealt with.
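
      A nightly sync like that can be a one-line cron job. As a sketch, the same idea wrapped in Python (the mount points are placeholders, and it assumes rsync is installed):

        # One-way nightly mirror of one disk onto another.
        import subprocess

        SRC = "/mnt/primary/"  # trailing slash: copy contents, not the directory
        DST = "/mnt/backup/"   # placeholder mount points

        # -a preserves permissions and timestamps; --delete makes DST track
        # deletions too, which is exactly why the one-day lag above matters.
        subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)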
      • by mcrbids ( 148650 )

        ... or you could set up ZFS with a mirrored vdev and keep snapshots. All the benefits of RAID1, combined with all the benefits of keeping any number of synced disks lying around. If you have many disks, go with RAIDZ and get RAID5-style reliability too.

        If you store lots of data, once you go ZFS you'll never want to go back.
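
        As a sketch of how little automation that snapshot habit needs (the pool and dataset names are hypothetical; assumes the standard zfs CLI is present):

          # Cron-able dated ZFS snapshot, in Python.
          import datetime, subprocess

          dataset = "tank/data"  # hypothetical pool/dataset
          snap = f"{dataset}@auto-{datetime.date.today().isoformat()}"

          # Snapshots are instant and share blocks with live data, so a long
          # history costs almost nothing until files are rewritten.
          subprocess.run(["zfs", "snapshot", snap], check=True)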

    • by ledow ( 319597 )

      Take that second drive.
      Put it in a USB enclosure.
      Run a backup once a week.

      Much less wear-and-tear on the drives. No big deal if something drops in your computer and shorts the 12V line, or you get water in it, or something else happens to the computer / SATA itself.

      Also, you can then even do one full and multiple differential backups assuming you're not jamming the drive to capacity (handy if you suddenly discover that the thing you did last week was stupid and has corrupted your older data).

      Live RAID is n…

    • by afidel ( 530433 )

      You realistically can't back up 6TB worth of data

      Sure you can; we back up over 10x that every weekend.

    • by Jahoda ( 2715225 )
      Repeat after me: RAID is not a backup.
    • by Eevee ( 535658 )

      backup … and … RAID1

      Please don't take this the wrong way, but you don't know what you're talking about. RAID is not a backup, and backup is not RAID.

      RAID is about keeping going through a hard drive failure. Backup is being able to recover any file within your backup time frame. If, in a RAID configuration, you delete a file and suddenly realize three months later that you really should have kept it, you're out of luck. If your OS decides to crap garbage all over your disk, RAID will faithfully mirror that garbage for you. It…

    • by radish ( 98371 )

      My 5-ish TB of data over at Crashplan begs to differ (and yes, I have a local copy as well).

      Mirrored drives are not a good idea for data protection: for one thing, an accidental delete (or overwrite, or ransomware, or whatever) will take your data out completely and instantly. Much better to do incremental backups at the file level, so you can restore deleted or damaged files from whenever you want in their history. Even if you don't want to pay for the cloud service, the CrashPlan software will do this ver…

  • Fastest: Seagate.
    Best Warranty: Seagate.
    Best Cache: WD Red... or the Seagate... the article contradicts itself between the first two pages.
    Cheapest: WD Green.

    Seagate notables: Full drive encryption available at a firmware level. AF and Legacy disks are separate models.
    WD Red notables: 5400RPM spindle speed.
    WD Green notables: none; nothing distinguishes it from the Red drive except a shorter warranty.

    Sandra benchmark results (write/read, MB/s):

    Seagate: 167W/168R.
    WD Red: 138W/138R.
    WD Green: 133W/133R.

    Atto results are shown on a messy graph with no clear numbers, but the Seagate wins that benchmark as well (albeit by a smaller delta).

    HD Tune Pro results basically reflect the transfer rates above. Seek times for the Seagate are 11ms for both write and read; the WD Red scores 16/17ms, and the WD Green is less than a millisecond higher. Burst rates are again better on the Seagate (276R/304W MB/s), with the WD Green at 217/220 and the Red at 217/218.

    CrystalDiskMark: basically the same numbers.

    Futuremark: prettier graphs with wonderful titles like "video editing" and "importing pictures"; here the results are a closer race, with each drive having its own task at which it wins (even the Green). Not much different from the 3TB numbers, and not that much different from each other.

    There was no mention of reliability metrics; presumably none of the disks failed during benchmarking. Consult your usual biases and experience regarding which drive is likely to fail; this was strictly a benchmark review, and, shockingly, the enterprise-grade drive with the highest rotational speed and the biggest cache, which costs the most money, got the best score.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      There are some useful bits in the blog post by Backblaze [backblaze.com], as they care a lot about making a good choice between the two 6TB drives.

      • Backblaze tested the slower and cheaper STBD6000100, not the ST6000NM0024.

        For their tests, they note that the WD Red uses slightly less energy (which is important to them, when they have racks full of the drives) and also that it can lay down 1TB a day MORE than the Seagate. Again, a slightly different workload than most of us have.

        For them, the extra cost and power of the higher spec Seagate aren't worth it.

        In summary: essentially equal performance (go to SSD if you need speed); essentially equal cost; slightly…

    • There was no mention of reliability metrics

      ...which is the only reason I'd care to read such an article. I have a Synology 4-bay NAS filled with drives for home stuff. Although it's not critical data and I have the most important folders backed up to Amazon Glacier, several TB of data is tied up in rips of our CD and DVD collection. While I could re-rip everything, the first effort took weeks and I'd strongly prefer not to have to again.

      So for my specific application, I don't care a lot about raw performance because everything's going through a 1Gb switch anyway. However, this thing runs 24/7 and I'd like a reasonably warm fuzzy feeling that I'm not likely to have two drives fail simultaneously. NAS drives (I've bought WD Red most recently) are specced for exactly that environment and have things like anti-vibration mechanisms to make them less likely to spontaneously explode. For the exact opposite, check out the Seagate Barracuda Data Sheet [seagate.com]. Scroll down to where they're rated for 2,400 power-on hours. In other words, they're built to survive a whopping 3 months in a NAS.

      If you're buying something to stick in your gaming computer, read the performance specs. If you actually care about the data you're writing, the reliability numbers are way more interesting.

        • Same here. I bought three 2TB drives for the NAS that I built just before the floods. I thought that I'd replace them with 4TB drives once the 4TB ones were as cheap as the 2TB ones that I'd bought. The 4TB ones are still more expensive, but the 2TB disks are older than I'm comfortable with (and getting full; ZFS performance really starts to degrade at 90% capacity). The WD drives seem to be reasonably well regarded, but I don't know whether the Red ones are actually worth more to me than the Green. Most…
        • I'm buying 6TB drives now because two years from now I'll really wish I would've. They're not much more in absolute dollars than 4TB drives, and WD Reds had a small margin over Greens the week I bought them. As of today:

          • 4TB WD Green: $140
          • 4TB Seagate "desktop HDD" (with the shitty 2,400 hour rating): $145
          • 4TB WD Red: $163
          • 6TB WD Red: $266

          At those prices, to me the Red drives are definitely worth the narrow price difference, and 6TB is reasonably priced. The Seagate is an expensive travesty.
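
          Reducing those list prices to dollars per terabyte makes the comparison explicit:

            # Cost per terabyte at the prices quoted above (Python).
            prices = {
                "4TB WD Green": (140, 4),
                "4TB Seagate desktop": (145, 4),
                "4TB WD Red": (163, 4),
                "6TB WD Red": (266, 6),
            }
            for name, (usd, tb) in prices.items():
                print(f"{name}: ${usd / tb:.2f}/TB")
            # Green $35.00, Seagate $36.25, Red $40.75, 6TB Red $44.33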

      • Yep, reliability for a large-capacity drive seems far more important. For best performance, use an SSD.

        I just bought a couple of 6TB WD Red drives, since they claim they're specifically designed and tested for NAS devices. I was replacing a failing 3TB Seagate Barracuda drive and wanted to increase capacity at the same time. I've got a Synology 5-bay and, like you, have an extensive DVD/Blu-ray ripped collection. I technically have "backups" on the discs, but it would be a pain in the ass to re-rip everything…

        • I started way back when with a Drobo and a 1TB WD Black. When I wanted to grow that, 2TB drives were the sweet spot so I added a 2TB WD Green. Same for a year or so later, when I added a Seagate 3TB Barracuda. When I upgraded the Drobo to the DS412+, I threw in a WD Green 4TB.

          Six months ago, the Seagate died. Tech support was decent and they replaced it under warranty with a refurb that had a 90 day warranty. At day 95, the replacement died. That's when I upped the ante and replaced it with a 6TB WD Red.

          I keep watching SMART stats on that WD Black 1TB with 25,434 hours on it but it seems to be holding steady. The WD Greens aren't NAS drives but they're chugging away with nary a scary SMART data point. Seagate can go screw themselves.

          • Well, I'll be giving WD a try for my next set of drives. It's really hard to know with such small sets of sample data, and nothing equivalent to compare them against. I guess I'll just have to see how those drives are holding up in two years time!

      • For the exact opposite, check out the Seagate Barracuda Data Sheet [seagate.com]. Scroll down to where they're rated for 2,400 power-on hours. In other words, they're built to survive a whopping 3 months in a NAS.

        If you're buying something to stick in your gaming computer, read the performance specs. If you actually care about the data you're writing, the reliability numbers are way more interesting.

        Look at the AFR on the data sheet. It's less than 1%, so obviously the MTBF is not 2,400 hours; it's more than 875,000 hours. An MTBF of 2,400 hours would translate to an AFR of 97.4%, which obviously would not fare very well in a qualification lab, not to mention the marketplace.
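
        Both figures fall out of the usual exponential failure model, where AFR = 1 - exp(-8760/MTBF); a quick check in Python:

          import math

          HOURS_PER_YEAR = 8760

          def afr(mtbf_hours):
              # Annualized failure rate under an exponential failure model.
              return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

          print(f"{afr(2_400):.1%}")    # 97.4%: a 2,400-hour MTBF is absurd
          print(f"{afr(875_000):.2%}")  # ~1.00%, matching the <1% AFR on the sheet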

        • Look at the AFR on the data sheet. It's less than 1%, so obviously the MTBF is not 2,400 hours; it's more than 875,000 hours.

          There's a difference between powered-on hours and total expected lifetime. These drives have a two-year warranty, so Seagate is betting each will last at least 1,200 powered-on hours per year, or about three hours a day. Also, an MTBF of 875,000 hours does not mean that a single drive will last 875,000 hours (about 100 years); it means that only about one drive in a hundred is expected to die per year.

          In the same data sheet, they claim the drive is ideal for:

          - Desktop or all-in-one PCs
          - Home servers
          - PC-based gaming systems
          - Desktop RAID
          -

          • I understand the relationship between MTBF and AFR. Of course, no single HDD will last 100 years, let alone on average. However, think about it: how in the world would an HDD manufacturer come up with an expected 2,400-hour lifetime? Qualification tests involve running on the order of 1,000 drives for 1,000 hours, from which a few drives will fail and the AFR and MTBF are derived. There is no way a 2,400-hour lifetime squares with a 1% AFR. The AFR numbers are clear. I'm not sure what "power-on hours" means. It's obviously not MTBF. Is it max lifetime?

            • I'm not sure what "power-on hours" means. It's obviously not MTBF. Is it max lifetime?

              It's just that: how many hours the drive is designed to be powered on for. Compare it to a light bulb labeled to last 1,000 hours but marketed as lasting two years, with fine print explaining "* when used for an hour per day". The expectation is that this particular drive will last 24 calendar months, but that it won't be powered up and spinning the whole time. Imagine an office computer that gets turned off at night and on weekends, and puts itself to sleep regularly throughout the day.

              Given that this is a…

    • by jandrese ( 485 )
      Basically, the Seagate drive was $200 more expensive and about 20% faster than the WD drives. The WD Red and Green drives were basically identical.
  • The concept of storing 6TB on one hard drive just scares me after replacing so many dead drives. Hard drives go bad more often than any other part in general-purpose computers.

    I would much rather have RAID6 across five 2TB drives. Basically, I'd rather have the most drives in the biggest RAID that still allows the lowest price per gigabyte...

  • I purchased the first 7,200 RPM disk available to consumers nearly 20 years ago now: the WD Expert, 18GB if I recall.
    http://www.prnewswire.com/news... [prnewswire.com]
    I've always hated the performance of disks, big enthusiast that I am, primarily because I knew they were the biggest bottleneck by far.

    Fast forward to today, and I am utterly bamboozled why people continue to purchase the bastard things. I detest them. They run hotter, cost more, are slightly more likely to fail, are noisier, and the performance difference is utterly negligible…

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...