Seagate Bulks Up With New 8 Terabyte 'Archive' Hard Drive

MojoKid writes: Seagate has just announced a new 'Archive' HDD series, one that offers capacities of 5TB, 6TB, and 8TB. That's right: 8 terabytes of storage on a single drive, and for only $260 at that. Back in 2007, Seagate was one of the first to release a hard drive based on perpendicular magnetic recording, a technology required to break past the roadblock of 250GB per platter. Since then, PMR has evolved to allow the release of drives as large as 6TB, but to go beyond that, something new was needed. That "something new" is shingled magnetic recording. As its name suggests, SMR aligns drive tracks in a shingled pattern, much like shingles on a roof. With this design, Seagate is able to cram much more storage into the same physical area. It should be noted that Seagate isn't the first out the door with an 8TB model, however, as HGST released one earlier this year. Instead of a design like SMR, HGST went the helium route, allowing it to pack more platters into a drive.
This discussion has been archived. No new comments can be posted.

  • I am just about to build a FreeNAS or NAS4Free box. I was planning on running three 4TB drives to give me 8TB usable, but I'm probably better off with a pair of these. I'm mostly using the storage for TV recording, so the slower speed is fine. If the slower speed also means lower power, then it's a big plus.

    • Re:Just in time. (Score:5, Insightful)

      by TheGratefulNet ( 143330 ) on Saturday December 13, 2014 @11:59AM (#48589263)

      you are better off with generation-1 than generation-current.

      never trust the very leading edge. and, we're talking seagate, here; their enterprise drives are ok, but their consumer drives, these days, I wouldn't touch. no way!

      no way I'm trusting helium, either; since it escapes and makes the drive useless a few years down the line.

      • by Anonymous Coward on Saturday December 13, 2014 @12:05PM (#48589283)

        you are better off with generation-1 than generation-current.

        never trust the very leading edge. and, we're talking seagate, here; their enterprise drives are ok, but their consumer drives, these days, I wouldn't touch. no way!

        no way I'm trusting helium, either; since it escapes and makes the drive useless a few years down the line.

        But you'll be able to tell when that happens when your voice gets really squeaky.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        What? Seagate discs for consumers have been pretty much bulletproof according to what I've been able to find. I've got discs from them that are 15 years old that I scrapped for lack of capacity rather than failure, and drives from them from many of the generations in between.
        HGST are the ones doing helium, not Seagate, BTW.

        • by Anonymous Coward on Saturday December 13, 2014 @01:03PM (#48589573)

          I got a Seagate 3TB in a USB enclosure a year or two back.

          Worked great for a year to a year and a half, then I started getting random hangs. At first I assumed it was the USB interface going bad, but upon removing the drive and directly plugging it into the system the same symptoms remained. Since it had been in an enclosure, it hadn't supported SMART access to the drive. The SMART readings with the bare drive didn't show anything obvious, but actually reading from the drive would give read errors, and too many read/write errors over a certain period would cause the drive to hang, sometimes hanging the entire bus.

          Long story short, it turned out I wasn't the only one having this problem; it happened pretty commonly across that entire line of drives, and there was neither a firmware fix nor warranty support for them. (The enclosures only gave a 1-year warranty despite the drives having a 3-year warranty tag printed on them. The only thing I can figure is they figured out the entire batch was bad, and about how long they'd last, and shoved them in a bunch of USB cases where they didn't expect anybody to find out.)

          Having dealt with that drive, and reviews of them online, I'm going to be averse to those, Hitachi 3TB, and possibly WD 3TB drives for the foreseeable future. Knock on wood, I haven't had ANY problems with 2 terabyte drives so far, and given that another stepping of drives is coming out, we might see the later versions of the current-gen drives becoming mature enough to rely on for more than a year, which going off reviews doesn't seem statistically safe yet for this generation.

          • by PRMan ( 959735 )
            Is that why mine is showing no SMART errors even though it's completely failing? I was wondering.
        • Re:Just in time. (Score:5, Insightful)

          by jandjmh ( 66714 ) on Saturday December 13, 2014 @01:06PM (#48589591) Homepage

          Yes, I also have very old Seagate drives with capacities from about 40 to 300 or so gigabytes that work fine. I also have a 5 gallon pail full of dead 1 terabyte drives that are 2-4 years old. I do IT consulting (mostly for small business) and the failure rate on the 1 terabyte and up drives has been hideous. I have been hammering on all my customers to do full drive image backups regularly - and to replace the backup devices as soon as they are over two years old. I'm generally not a hard sell guy, but I am pushing this, because I don't want them to be able to say they weren't warned when I have to charge them $thousands to get going again after a drive fails.

          • Re:Just in time. (Score:5, Insightful)

            by beelsebob ( 529313 ) on Saturday December 13, 2014 @01:37PM (#48589779)

            You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?

            • Re:Just in time. (Score:5, Informative)

              by WaffleMonster ( 969671 ) on Saturday December 13, 2014 @03:11PM (#48590295)

              You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?

              It's simple to upgrade the firmware when you can still access the drive; otherwise you have to jigger up a TTL-level serial interface and send AT commands to unbrick the thing... lots of "fun".

            • by jedidiah ( 1196 )

              Broken is still broken. Shipping a broken product is perhaps tolerable for a GAME but it simply shouldn't be tolerated for hardware. If the end user has to "patch" a piece of hardware then it's still an engineering fail and Seagate deserves every bit of grief anyone gives them.

              Their 3TB drives in particular seem to implode at about 18 months.

            • Re:Just in time. (Score:4, Informative)

              by 0111 1110 ( 518466 ) on Sunday December 14, 2014 @03:38AM (#48592627)

              You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?

              So you bought the Seagate company line about that? Either you never owned one of those drives or you were one of the lucky few that was eventually helped by the firmware fix. Although why you would wait around for many months for the 'simple fix' when you could get a refurb replacement immediately I don't know.

              This is why a good PR firm is worth its weight in gold. It's okay to have a catastrophic production failure as long as you can retroactively convince the ones who didn't get burned that it was all just a big misunderstanding and was easily fixable with a simple firmware update. If only Hitachi had done so well with their infamous Deathstar drives.

              So you believed their propaganda. Go back to the Seagate forums from that time and I think you will see that the so-called "firmware fix" only fixed a small percentage of the problems with those drives. There was another fix that helped some people (more than the firmware update did) that involved removing the drive's circuit board and hacking the hardware yourself. I believe a soldering iron may have been required in addition to a particular sort of cable. I can't remember exactly, but it was not a fix that most people would be able to apply, and often it didn't work anyway. I had a 1.5TB 7200.11 that I kept for ages, meaning to eventually buy the cable and apply the fix, but by the time I got around to maybe doing it 1.5TB was a very small drive and I didn't care so much about the lost data anymore.

              I had six 7200.11s, both 1TB and 1.5TB. Most failed in less than 6 months, and then their replacements failed too. None of them work today. Not a single one. And your firmware fix could not be applied to any of my drives, because it was not a firmware problem, at least with my drives. Yes, a small percentage of 7200.11s did have firmware problems, but mostly it was a hardware unreliability problem: the click of death, as well as drives that just refused to stay online for long. They'd just drop out. And all kinds of 'delayed write' errors, etc. Those were not caused by poorly written firmware. They were 100% authentic hardware problems, and Seagate shipped out countless new drives to replace the things under warranty, which would seem like a rather expensive thing to do if all they had to do was update the firmware. But maybe you will say even Seagate "didn't do any research" and was unaware of the "simple fix" you speak of.

              Despite your convenient assumption about lots of 7200.11 owners being unaware of the too-little, far-too-late 'fix' of a firmware update that didn't even work for most owners, I suspect that most found out about it when their drives started failing. A simple Google search for '7200.11' and 'clicking noise' would eventually have gotten hits for the so-called 'fix'. Of course, it took Seagate forever and a day to even come up with that. I don't think they have ever admitted that there was any sort of problem with the drives, and by the time they came up with your so-called "simple fix" most owners had already been burned pretty badly by their decision to go with Seagate. Before my 7200.11s I had been a big fan of Seagate. Nearly all my drives were Seagates. Now I don't care what name is on the drive; they are all incredibly unreliable. I usually have better luck with their refurb replacements.

        • Re:Just in time. (Score:5, Informative)

          by Immerman ( 2627577 ) on Saturday December 13, 2014 @02:05PM (#48589911)

          Unfortunately there is a common trend in the commercial world where a once-quality brand decides to cash in on its reputation and sell low-quality crap at "we're a quality brand" prices. No doubt it boosts profit margins dramatically, for a while, but it means the world loses another quality brand, and a lot of customers get screwed over. And sometimes it's a graduated process where the high-end enterprise/boutique products continue to maintain their quality to prop up the brand, while the quality of the normal products falls off a cliff.

          I haven't been following hard drives closely enough to comment on Seagate's case, but I've seen it happen to far too many once-great brands to be even remotely surprised.

          • unfortunately, even voting with your wallet is out of the question these days, since you only have a duopoly to choose from. I just hope SSDs will soon catch up capacity-wise.

            • A duopoly? Has Toshiba collapsed as well without me noticing? Surely neither Seagate nor WD has gone under.

            • by jbolden ( 176878 )

              I don't think they will. As SSDs replace HDDs for day-to-day tasks, HDDs are replacing tape for longer-term archiving. HDDs are going to move to slower and bigger, while SSDs are going to have to balance faster with bigger. The result will be many years of HDDs having higher capacity.

        • We have a bunch of Seagate SV35 drives in a backup server. They started to get kicked out of RAIDZ one by one. Some show actual bad sectors (and were replaced since the warranty had not expired), but others worked OK when tested using MHDD, and the seller refused to replace them under warranty.

          It turned out that those drives are so sensitive to vibration that dropping a coin on the PC case (with the drive secured in the drive bay) from a few cm height caused the drive to hang for about two seconds and emit a bee

      • Re:Just in time. (Score:5, Informative)

        by ShanghaiBill ( 739463 ) on Saturday December 13, 2014 @12:26PM (#48589379)

        their enterprise drives are ok but I wouldn't touch them, these days, for consumer drives. no way!

        There is no difference in reliability between "enterprise" and "consumer" drives. Those are purely marketing terms. The sole advantage of enterprise drives is a longer warranty. If you are bad at math, you might think that is a good deal.

        • Drives intended to go in RAID arrays have different firmware and handle errors differently.

          They may also get different testing. I worked for a telecom equipment vendor and there were specific drives that had been tested for behaviour under high/low temperatures, high/low humidity, vibration, etc.

          If you're a big enough company then drive manufacturers will actually work with you to resolve drive firmware issues and/or answer questions about specific behaviours on their enterprise drives.

          Lastly, at least in

        • by mysidia ( 191772 )

          There is no difference in reliability between "enterprise" and "consumer" drives. Those are purely marketing terms

          The statement you have made is an overly broad generalization.

          There are a multitude of differences between the average consumer drive and the average enterprise disk drive, which affect operational reliability of the drive in various scenarios.

          For a consumer drive, the reliability has to be measured as correct operation of a single disk drive in a consumer workstation.

          For an enterprise d

          • Re: (Score:2, Informative)

            Consumer disk drives cannot be substituted in while retaining the same level of reliability.

            Thanks for your unsupported and unsubstantiated opinion. All the actual data says otherwise. If "enterprise" drives were actually more reliable, you would see them used in datacenters by companies like Google, Facebook, Yahoo, etc. But all of these companies use "consumer" drives in their datacenters. So does everyone else that believes data over marketing.

            • Re:Just in time. (Score:5, Insightful)

              by mysidia ( 191772 ) on Saturday December 13, 2014 @05:17PM (#48590897)

              No. Go look at an upper mid-sized enterprise, and ask what kind of hardware they have running their Microsoft SQL Servers, their Exchange server, or their Oracle cluster.

              What Google, Facebook, and Yahoo are doing is not relevant at the enterprise level. These are super-colossal cloud-scale companies, three orders of magnitude larger than ordinary enterprises.

              Enterprise hard drives are designed for Enterprise use, not Google or Facebook's cloud or HPC clusters.

              These massive companies also have their own custom hardware built at their disposal. They are not using RAID arrays like most enterprises are using, and they essentially have massive farms of workstations instead of servers running their computational workloads.

              At sufficient scale, you can achieve reliability from consumer disk drives for in-house applications, by designing your application around your components, BUT the major requirement is that you are in control of the application stack, so you can actually use the disk drives like you want --- and not have to stick them in a tightly-coupled RAID array.

              Consumer disk drives are not so unusable that you can't work around their limitations by having thousands of them in a cluster, with terabytes of cache spread over 5000 computers, and some smart application logic doing what ordinary RAID subsystems cannot.

              • And yet some of those companies have published individual drive data showing the exact reliability. I suggest doing some reading on Backblaze's blog before you claim some mythical reliability advantage for a hard drive in some strange mid-tier solution. You'll find that the reliability figures don't change between enterprise and consumer grade stuff.

                Now while you're spitting out observations let's dig into that for a while. I'm building an SQL Server and I work for a large enterprise. Do I
                a) dedicate my valuable

        • There are differences in firmware though when you compare enterprise 7200rpm drives to desktop 7200rpm drives - error timeouts for example, and caching algorithms. You can tweak the drives to change the timeouts and recalibration times to make desktop drives behave better in arrays but they are _not_ otherwise identical. Also, although you can throw a SATA drive on a SAS controller (I have such a setup at home) throughput in an array is generally much better with SAS drives. At home I edit the timeouts on

      • you are better off with generation-1 than generation-current.

        I completely agree. I'm about to retire a rack of 1TB drives in my NAS and replace them with three 4TB drives in a RAID 5 array. The 4TB drives had to be out a year before I started to trust them.

        Live on the bleeding edge with shit you're not afraid to lose. Trust your important shit to well-tested 2nd or 3rd generation technology.

        • Re:Just in time. (Score:4, Informative)

          by tabrisnet ( 722816 ) on Saturday December 13, 2014 @02:20PM (#48589991)

          Don't use RAID5 with drives over 1TB.

          a) a RAID5 rebuild takes many hours, b/c it involves reading the entire disc.
          b) drives from the same production batch tend to cluster failures.
          c) I recall reading that you tend to hit an uncorrectable read error somewhere around the 2TB mark.

          That is, chances are very good that a single drive failure will become a 2-drive failure during a rebuild.

          RAID6 or nothing.
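
          Rough numbers, as a back-of-the-envelope sketch (assuming the 1-in-10^14 bit URE rate most consumer spec sheets quote, and treating errors as independent, which real drives only approximate):

          import math

          URE_RATE = 1e-14  # unrecoverable errors per bit read (typical consumer spec)

          def p_rebuild_hits_ure(surviving_drives, tb_per_drive):
              """Chance of at least one URE while reading every surviving drive."""
              bits_read = surviving_drives * tb_per_drive * 1e12 * 8
              return 1 - math.exp(-URE_RATE * bits_read)  # Poisson approximation

          # A 4x4TB RAID5 loses one drive: the rebuild must read the 3 survivors.
          print(p_rebuild_hits_ure(3, 4))  # ~0.62 -- worse than a coin flip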

          • The RAID-5 is not set in stone. RAID-6 is an option that has not been ruled out; odds are I will go that route.

            I know about drive failures clustering in batches like that. I've been bitten by it before. I usually buy drives from different sources weeks apart. That increases the odds that the drives will come from different batches. I don't know if that actually affects the reliability of the drives themselves, but it makes me feel better.

          • by Fweeky ( 41046 )

            I recall reading that you tend to hit an uncorrectable read error somewhere around the 2TB mark.

            12.5TB, assuming the 1-in-10^14 bit uncorrectable-read-error rate specified for most consumer drives is accurate. I certainly don't see rates anywhere near that high with my consumer drives, but I could just be lucky.
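
            (The arithmetic: 10^14 bits per error / 8 bits per byte = 1.25*10^13 bytes = 12.5TB read, on average, per unrecoverable error.)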

            • by darkain ( 749283 )

              The question is though, what method are you using to test for these errors in the first place? How do you KNOW there has not been a read error occurring within the discs? This is a big reason why ZFS exists. http://en.wikipedia.org/wiki/D... [wikipedia.org]
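
              For illustration, the core idea ZFS implements, in miniature (a sketch of end-to-end checksumming in general, not ZFS's actual on-disk format):

              import hashlib

              def write_block(data: bytes):
                  # Store a checksum alongside the data, as checksumming filesystems do.
                  return (data, hashlib.sha256(data).digest())

              def read_block(stored):
                  # Verify on every read, so silent corruption is detected
                  # instead of being handed back to the application as good data.
                  data, checksum = stored
                  if hashlib.sha256(data).digest() != checksum:
                      raise IOError("checksum mismatch: silent corruption detected")
                  return data

              block = write_block(b"important data")
              corrupted = (b"important dat4", block[1])  # simulate a bit flip on disc
              try:
                  read_block(corrupted)
              except IOError as e:
                  print(e)  # caught, instead of silently returning bad data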

      • Good, because these don't have Helium.

      • Their consumer drives have gone to absolute shit. I was buying them because they were marginally cheaper than the other choices. I ended up with a couple dozen running over the period of about a year. As each matured to about 1.5 years old, they started dying. Seagate reduced their warranty for consumer drives down to 1 year, so now they're all paperweights.

        I guess they're ok, if you want to build a computer that you only want to use for 1 year. Maybe building out a machine for someone you don't

      • by PRMan ( 959735 )
        I have just had a couple of Seagate drives fail just outside of their 1-year warranties. I'm never buying Seagate again, no matter how cheap they are (and a 3TB was only $100 when I got it).
    • by swb ( 14022 )

      With 8 TB drive sizes I would think you would want double parity and some kind of hotspare. The rebuild times on that could be glacial.

      • Re:Just in time. (Score:4, Informative)

        by Voyager529 ( 1363959 ) <voyager529@yahoo. c o m> on Saturday December 13, 2014 @01:07PM (#48589595)

        Crow, listen to this guy. Assuming these things have 100MBytes/sec write speed, a simple RAID-1 will take over 22 hours to rebuild.

        If you want 8TB of usable space, get 4x4TB and RAIDz2 (i.e. RAID6) them. Even if it's disposable data, the data must be of sufficient use to justify a FreeNAS build over a simple external. It's worth your time to do it right.
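
        (Rough math, assuming a constant 100 MBytes/sec and no other load on the array: 8*1000*1000 / 100 / 60 / 60 = 22.2 hours.)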

        • by Vairon ( 17314 )

          The average read/write speed of this drive is 150MB/sec with a maximum sustained read rate of 190MB/sec. See http://www.seagate.com/files/w... [seagate.com]

          Assuming only the average read/write rate it would take 14 hours and 48 minutes to simultaneously read from one drive and write to another.
          8 TB = 8*1000*1000 MB; 8*1000*1000 / 150 / 60 / 60 = 14.81 hours

          • Yeah, assuming you're not doing anything at all with the array while it's rebuilding, and none of the sectors have been remapped causing seeks in the middle of those long reads/writes.

            To throw out one more piece of advice; RAID6 is useless without periodic media scans. You don't want to discover that one of your drives has bit errors while the array is rebuilding another failed drive. RAID6 can't correct a known-position error and an unknown-position error at the same time. raidz2 has checksums that sho

    • The slower speed won't get you lower power there. The drive is slow when re-writing because, due to the tech used, it has to do some copy/delete/write work very roughly similar to having to erase a whole block of flash to write a single logical 512-byte or 4096-byte sector.
      If you mostly store large stuff that doesn't get deleted, or don't care about the possible reduction in write speed, it's still fine to get that drive. (good at recording TV stuff you intend to keep, not that good if you're continuously rec

    • Why do we need PMR or SMR, given that flash memory densities are catching up to these? Rather than extending PMR, they should consider SSDs
  • by SensitiveMale ( 155605 ) on Saturday December 13, 2014 @12:08PM (#48589289)

    and then let's hear about how it's all anecdotal evidence.

    Then someone will bring out the backblaze survey.

    Then someone will say "They've never had a problem with Seagate, but WD sucks."

    Then someone will lament how IBM no longer makes drives. Then the deskstar stories will start.

    In other words, the same responses every time a hard drive story is posted.

    • by Pieroxy ( 222434 )

      It's not deskstar but deathstar I was told...

      As an anecdote, my first HDD to ever fry was an 8GB Deskstar. I lost everything. Now I have backups and RAID. Many failures later (at least 3), I've yet to lose a single bit I deemed important.

    • Then the deskstar stories will start.

      Hey, that's one of the classics! SonyBMG rootkit, removal of OtherOS, and the Deathstar hard drives. Two decades of ranting excellence!

    • Boy I was about to post a Backblaze survey concerning enterprise vs consumer drives. I am so glad I waited until I read your post.

  • . . . to rebuild your array when one of these puppies fails? :-( I realize it's progress, but I'm just thinking of the practical realities of using individual drives this big in a NAS.
  • I've just had two 5TB Seagates fail. Out of two.
  • Archive? (Score:4, Insightful)

    by dfn5 ( 524972 ) on Saturday December 13, 2014 @12:26PM (#48589381) Journal
    Why does this drive have the archive moniker? Is it any more reliable than a non-archive drive? The name suggests I can put data on it and shelve it for 20 years and come back with all the data still there. Is there any indication that might be the case? Somehow I doubt it.
    • by sshir ( 623215 )
      My guess is that, due to the way they write data onto the platter, this drive is pretty much useless for random writes (even more so than a regular hard drive). It's good only for huge sequential writes, i.e. just bulk storage. Knowing this, they allegedly added some long-term reliability features and slapped an "archival" moniker on it.
    • Re:Archive? (Score:5, Informative)

      by Phs2501 ( 559902 ) on Saturday December 13, 2014 @12:38PM (#48589455)
      From this article here [lwn.net] it appears that shingled hard drives are not completely random-access for writes. They will probably need some sort of flash-like translation layer to support normal file systems. (Or Seagate has provided that layer internally like SSDs do, in which case as a first-generation device it is probably buggy and will lose your data...)
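
      Conceptually, such a layer could be as simple as this (a minimal sketch of a log-structured translation layer; the class name and band size are made up for illustration and are not Seagate's actual design):

      class ShingleTranslationLayer:
          """Writes always append to an open band; a map records where each
          logical block currently lives, so overwrites never touch shingled
          tracks in place. Stale copies would be reclaimed later by garbage
          collection (not shown)."""
          def __init__(self, band_sectors=65536):  # hypothetical band size
              self.band_sectors = band_sectors
              self.bands = [[]]  # physical bands; the last one is open for appends
              self.map = {}      # logical block address -> (band index, offset)

          def write(self, lba, data):
              band = self.bands[-1]
              if len(band) == self.band_sectors:  # band full: open a fresh one
                  self.bands.append([])
                  band = self.bands[-1]
              band.append(data)
              self.map[lba] = (len(self.bands) - 1, len(band) - 1)  # old copy is now garbage

          def read(self, lba):
              band, offset = self.map[lba]
              return self.bands[band][offset]

      stl = ShingleTranslationLayer()
      stl.write(7, b"v1")
      stl.write(7, b"v2")  # the overwrite appends; nothing is rewritten in place
      print(stl.read(7))   # b'v2'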
      • Yep, I remember watching some Linux conference talk about the upcoming hot new SMR tech (2 years ago?), and I think they said those drives are read-modify-write on every write; that is the price you have to pay for huge capacities.

        They are targeting long term storage, and will be useless for desktop/server use.

        • by sshir ( 623215 )
          Actually, as somebody pointed out, if you put any modern copy-on-write file system (xfs, btrfs, etc.) on that puppy, an SMR disk will work just like any other hard drive.
          • by butlerm ( 3112 )

            XFS is not a copy-on-write filesystem, by the way. ZFS and BTRFS should definitely work better, but they might need some internal tweaks to make the best use of it.

        • by Vairon ( 17314 )

          The average read/write seek time of this drive is 12ms. That seems quite usable to me for multiple use cases.

          reference: http://www.seagate.com/www-con... [seagate.com]

    • by mc6809e ( 214243 )

      It looks to me like these drives write a large amount of data as a spiral of multiple tracks so that the platter must rotate many times to complete the write.

      That's fine for streaming data sequentially to the disk for long term storage.

      Random writes must be dog-slow, though.

    • Others replied mentioning it's because SMR is mostly useful for sequential writes, not random. That's true, and also the drive needs idle time between writes for garbage collection and remapping. It therefore fits the bill for daily backups, which are sequential and give the drive time to garbage collect before it's used again.

      It's less suited to something like storing security footage, where it has to record 24/7. Unless of course the recording software is specifically designed for SMR drives and writes

  • by sshir ( 623215 ) on Saturday December 13, 2014 @12:28PM (#48589391)
    It makes the 6TB WD Green look way overpriced. It will be fun to watch the price action on Newegg.

    These drives are targeting more or less the same market. And judging by the number of complaints, WD's 4 and 6TB drives are not much better in the reliability department (although I might be wrong in that regard).

  • Now my backups can disappear because my Seagate "Archive" drive took a sh*t 2 years after I bought it.

    Seriously. I just went through a stack of 5 Seagate HDDs, from different customers, with a sledge hammer. They all died with S.M.A.R.T. failures.

    I wouldn't trust Seagate with my data unless I *wanted* it to self-destruct.

    • Comment removed based on user account deletion
  • Or is it like the current 8 and 10TB drives that only seem to exist at the fantastipotamus store?

  • by JoeyRox ( 2711699 ) on Saturday December 13, 2014 @01:04PM (#48589577)
    SMR drives are fine for I/O scenarios that don't overwrite data very often but they suffer from significant performance penalties for overwrites due to the read-modify-write/write-relocate operations required to modify a set of sectors within a shingled-encoded track. There are tricks to lessen the impact such as virtual sector remapping and background remapping but those can't avoid performance penalties in many scenarios.
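
    A toy model of that penalty (a sketch with a made-up band size; Seagate doesn't publish the real geometry or translation-layer logic):

    import random

    BAND = 65536  # sectors per shingled band -- a hypothetical figure

    def write_cost(sector, band_fill):
        # Appending at the band's write frontier disturbs nothing: cost is 1 sector.
        # Overwriting earlier data forces a read-modify-write of everything from
        # that sector to the end of the written region, because shingled tracks
        # partially overlap the tracks written after them.
        if sector >= band_fill:
            return 1
        return band_fill - sector

    # Archive-style workload: every write is an append.
    seq_cost = sum(write_cost(s, s) for s in range(BAND))

    # Random overwrites of an already-full band: each one rewrites a long tail.
    random.seed(1)
    rnd_cost = sum(write_cost(random.randrange(BAND), BAND) for _ in range(BAND))

    print(rnd_cost / seq_cost)  # amplification factor in the tens of thousands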
    • by rtaylor ( 70602 )

      It's handy that modern filesystems are mostly copy-on-write anyway.

        • But the majority of users aren't running OSes whose filesystems have copy-on-write support, including Windows 7 and below with NTFS.
  • by dltaylor ( 7510 ) on Saturday December 13, 2014 @04:16PM (#48590573)

    Write Once Read Mostly

    Shingled media is almost useless for random access, since rewriting a logical block means relocating its entire "shingle" strip somewhere else, then, at some other time, garbage-collecting the entire region and relocating the still-in-use blocks. You definitely want to run these "noatime", to prevent thrashing directory blocks, and they should probably have a new filesystem designed for them.

    Some have tried tinkering with flash filesystems, since the "copy/invalidate/garbage collect" pattern is similar: the LBAs are gathered into some larger storage block in no particular order, and that storage block needs to be managed. I don't know if Seagate will tell us what the size of an erase block (a set of overlapping, concentric "shingles", which have to be collected as a group) really is, or if they'll even be a consistent size.

    If you're streaming from them, you may hit long "garbage collect" access times, and I don't know what proprietary commands and settings may be available, if any, to tell the drive "now is a good time to do housekeeping".

    As "archive media", shingled drives probably work OK, since that is a WROM application, but, personally, I would NOT use them on any existing file system.
