Seagate Bulks Up With New 8 Terabyte 'Archive' Hard Drive
MojoKid writes: Seagate has just announced a new 'Archive' HDD series, one that offers capacities of 5TB, 6TB, and 8TB. That's right, 8 terabytes of storage on a single drive, and for only $260 at that. Back in 2007, Seagate was one of the first to release a hard drive based on perpendicular magnetic recording, a technology that was required to help us break past the roadblock of achieving more than 250GB per platter. Since then, PMR has evolved to allow the release of drives as large as 10TB, but to go beyond that, something new was needed. That "something new" is shingled magnetic recording. As its name suggests, SMR aligns drive tracks in a shingled pattern, much like shingles on a roof. With this design, Seagate is able to cram much more storage into the same physical area. It should be noted that Seagate isn't the first out the door with an 8TB model, however, as HGST released one earlier this year. Instead of a design like SMR, HGST went the helium route, allowing it to pack more platters into a drive.
Just in time. (Score:2)
I am just about to build a FreeNAS or NAS4Free box. I was planning on running three 4TB drives to give me 8TB usable, but I'm probably better off with a pair of these. I'm mostly using the storage for TV recording, so the slower speed is fine. If the slower speed also means lower power, then it's a big plus.
Re:Just in time. (Score:5, Insightful)
you are better off with generation-1 than generation-current.
never trust the very leading edge. and, we're talking seagate, here; their enterprise drives are ok but I wouldn't touch them, these days, for consumer drives. no way!
no way I'm trusting helium, either; since it escapes and makes the drive useless a few years down the line.
Re:Just in time. (Score:5, Funny)
you are better off with generation-1 than generation-current.
never trust the very leading edge. and, we're talking seagate, here; their enterprise drives are ok but I wouldn't touch them, these days, for consumer drives. no way!
no way I'm trusting helium, either; since it escapes and makes the drive useless a few years down the line.
But you'll be able to tell when that happens when your voice gets really squeaky.
Re:Just in time. (Score:5, Funny)
easily fixed with autotune...
Re: (Score:2, Interesting)
What? Seagate discs for consumers have been pretty much bullet proof according to what I've been able to find. I've got discs from them that are 15 years old that I scrapped for lack of capacity rather than failure and drives from them from many of the generations in between.
HGST are the ones doing helium, not seagate BTW.
May depend on the drive. (Score:5, Interesting)
I got a Seagate 3TB in a USB enclosure a year or two back.
Worked great for a year to a year and a half, then it started randomly hanging. At first I assumed it was the USB interface going bad, but upon removing the drive and plugging it directly into the system, the same symptoms remained. Since it had been in an enclosure, I hadn't had SMART access to the drive. The SMART readings with the bare drive didn't show anything obvious, but actually reading from the drive would give read errors, and too many read/write errors over a certain period would cause the drive to hang, sometimes hanging the entire bus.
Long story short, it turned out I wasn't the only one having this problem; it happened pretty commonly across that entire serial line of drives, and there was neither a firmware fix nor warranty support for them. (The enclosures only gave a 1-year warranty despite the drives having a 3-year warranty tag printed on them. The only thing I can figure is they realized the entire batch was bad, knew roughly how long they'd last, and shoved them in a bunch of USB cases where they didn't expect anybody to find out.)
Having dealt with that drive, and reviews of them online, I'm going to be averse to those, Hitachi 3TB, and possibly WD 3TB drives for the foreseeable future. Knock on wood, I haven't had ANY problems with 2 terabyte drives so far. Given that another stepping of drives is coming out, we might see the later versions of the current-gen drives becoming mature enough to rely on for more than a year, which, going off reviews, doesn't seem statistically safe yet for this generation.
Re: (Score:2)
Re:Just in time. (Score:5, Insightful)
Yes, I also have very old Seagate drives with capacities from about 40 to 300 or so gigabytes that work fine. I also have a 5 gallon pail full of dead 1 terabyte drives that are 2-4 years old. I do IT consulting (mostly for small business) and the failure rate on the 1 terabyte and up drives has been hideous. I have been hammering on all my customers to do full drive image backups regularly - and to replace the backup devices as soon as they are over two years old. I'm generally not a hard sell guy, but I am pushing this, because I don't want them to be able to say they weren't warned when I have to charge them $thousands to get going again after a drive fails.
Re:Just in time. (Score:5, Insightful)
You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?
Re:Just in time. (Score:5, Informative)
You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?
It's simple to upgrade the firmware when you can still access the drive; otherwise you have to jigger up a TTL-level serial interface and send serial commands to unbrick the thing... lots of "fun".
Re: (Score:2)
Broken is still broken. Shipping a broken product is perhaps tolerable for a GAME but it simply shouldn't be tolerated for hardware. If the end user has to "patch" a piece of hardware then it's still an engineering fail and Seagate deserves every bit of grief anyone gives them.
Their 3TB drives in particular seem to implode at about 18 months.
Re:Just in time. (Score:4, Informative)
You mean you got hit by the 7200.11 bug and didn't do any research into it to discover that it's a firmware issue with a simple fix?
So you bought the Seagate company line about that? Either you never owned one of those drives or you were one of the lucky few that was eventually helped by the firmware fix. Although why you would wait around for many months for the 'simple fix' when you could get a refurb replacement immediately I don't know.
This is why a good PR firm is worth its weight in gold. It's okay to have a catastrophic production failure as long as you can retroactively convince the ones who didn't get burned that it was all just a big misunderstanding and was easily fixable with a simple firmware update. If only Hitachi had done so well with their infamous Deathstar drives.
So you believed their propaganda. Go back to the Seagate forums from that time and I think you will see that the so called "firmware fix" only fixed a small percentage of the problems with those drives. There was another fix that helped some people (more than with the firmware update) that involved removing the pc board of the drive and hacking the hardware yourself. I believe a soldering iron may have been required in addition to a particular sort of cable. I can't remember exactly but it was not a fix that most people would be able to apply and often it didn't work anyway. I had a 1.5 TB 7200.11 that I had been keeping for ages to eventually buy the cable and apply the fix but by the time I got around to maybe doing it 1.5 TB was a very small drive and I didn't care so much about the lost data anymore.
I had 6 7200.11s. Both 1 TB and 1.5 TB. Most failed in less than 6 months and then their replacements failed too. None of them work today. Not a single one. And your firmware fix could not be applied to any of my drives because it was not a firmware problem. At least with my drives. Yes a small percentage of 7200.11s did have firmware problems, but mostly it was a hardware unreliability problem. The click of death as well as drives that just refused to stay online for long. They'd just drop out. And all kinds of 'delayed write' errors etc. Those were not caused by poorly written firmware. They were 100% authentic hardware problems and Seagate shipped out countless new drives to replace the things on warranty which would seem like a rather expensive thing to do if all they had to do was update the firmware. But maybe you will say even seagate "didn't do any research" and was unaware of the "simple fix" you speak of.
Despite your convenient assumption about lots of 7200.11 owners being unaware of the too little and far too late 'fix' of a firmware update that didn't even work for most owners, I suspect that most found out about it when their drives started failing. A simple google search for '7200.11' and 'clicking noise' would eventually have gotten hits for the so called 'fix'. Of course it took Seagate forever and a day to even come up with that. I don't think they have ever even admitted that there was any sort of problem with the drives and by the time they came up with your so called "simple fix" most owners had already been burned pretty badly by their decision to go with Seagate. Before my 7200.11 I had been a big fan of Seagate. Nearly all my drives were Seagates. Now I don't care what name is on the drive. They are all incredibly unreliable. I have better luck with their refurb replacements usually.
Re:Just in time. (Score:5, Informative)
Unfortunately, there is a common trend in the commercial world where a once-quality brand decides to cash in on its reputation and sell low-quality crap at "We're a quality brand" prices. No doubt it boosts profit margins dramatically, for a while, but it means the world loses another quality brand, and a lot of customers get screwed over. And sometimes it's a graduated process where the high-end enterprise/boutique products continue to maintain their quality to prop up the brand, while the quality of the normal products falls off a cliff.
I haven't been following hard drives closely enough to be able to comment on Seagate's case, but I've seen it happen to far too many once-great brands to be even remotely surprised.
Re: (Score:2)
unfortunately, even voting with your wallet is out of the question these days since you only have a duopoly to choose from. i just hope ssds will soon catch up capacity-wise.
Re: (Score:2)
A duopoly? Has Toshiba collapsed as well without me noticing? Surely neither Seagate nor WD has gone under.
Re: (Score:3)
yes, a duopoly of WD and Seagate. at least in 3.5" disk market.
http://www.wdc.com/en/company/... [wdc.com]
3.5" toshiba drives are pretty much WD with Toshiba's firmware and branding.
Re: (Score:2)
I don't think they will. As SSD replaces HDD for day-to-day tasks, HDD is replacing tape for longer-term archives. HDDs are going to move to slower and bigger, while SSDs are going to have to balance faster with bigger. The result will be many years of HDD having higher capacity.
Re: (Score:2)
No, it means I didn't proofread and a homophone slipped through while my fingers were busy typing two sentences behind my brain.
Re: (Score:2)
We have a bunch of Seagate SV35 drives in a backup server. They started to get kicked out of RAIDZ one by one. Some show actual bad sectors (and were replaced since the warranty had not expired), but others worked OK when tested using MHDD, and the seller refused to replace them under warranty.
It turned out that those drives are so sensitive to vibration that dropping a coin on the PC case (with the drive secured in the drive bay) from a few cm height caused the drive to hang for about two seconds and emit a bee
Re:Just in time. (Score:5, Informative)
their enterprise drives are ok but I wouldn't touch them, these days, for consumer drives. no way!
There is no difference in reliability between "enterprise" and "consumer" drives. Those are purely marketing terms. The sole advantage of enterprise drives is a longer warranty. If you are bad at math, you might think that is a good deal.
Not the only difference. (Score:2)
Drives intended to go in RAID arrays have different firmware and handle errors differently.
They may also get different testing. I worked for a telecom equipment vendor and there were specific drives that had been tested for behaviour under high/low temperatures, high/low humidity, vibration, etc.
If you're a big enough company then drive manufacturers will actually work with you to resolve drive firmware issues and/or answer questions about specific behaviours on their enterprise drives.
Lastly, at least in
Re: (Score:2)
There is no difference in reliability between "enterprise" and "consumer" drives. Those are purely marketing terms
The statement you have made is an overly broad generalization.
There are a multitude of differences between the average consumer drive and the average enterprise disk drive, which affect operational reliability of the drive in various scenarios.
For a consumer drive, the reliability has to be measured as correct operation of a single disk drive in a consumer workstation.
For an enterprise d
Re: (Score:2, Informative)
Consumer disk drives cannot be substituted in while retaining the same level of reliability.
Thanks for your unsupported and unsubstantiated opinion. All the actual data says otherwise. If "enterprise" drives were actually more reliable, you would see them used in datacenters by companies like Google, Facebook, Yahoo, etc. But all of these companies use "consumer" drives in their datacenters. So does everyone else that believes data over marketing.
Re:Just in time. (Score:5, Insightful)
No. Go look at an upper mid-sized enterprise, and ask what kind of hardware they have running their Microsoft SQL Servers, their Exchange server, or their Oracle cluster.
What Google, Facebook, and Yahoo are doing is not relevant at the enterprise level. These are super-colossal cloud-scale companies, that are 3 orders of magnitude larger than Enterprise computing, not ordinary enterprises.
Enterprise hard drives are designed for Enterprise use, not Google or Facebook's cloud or HPC clusters.
These massive companies also have their own custom hardware built at their disposal. They are not using RAID arrays like most enterprises are using, and they essentially have massive farms of workstations instead of servers running their computational workloads.
At sufficient scale, you can achieve reliability from consumer disk drives for in-house applications, by designing your application around your components, BUT the major requirement is that you are in control of the application stack, so you can actually use the disk drives like you want --- and not have to stick them in a tightly-coupled RAID array.
Consumer disk drives are not so unusable that you can't work around the limitations by having thousands of them in a cluster, with terabytes of cache spread over 5000 computers, and some smart application logic doing what ordinary RAID subsystems cannot.
Re: (Score:3)
And yet some of those companies have published individual drive data showing the exact reliability. I suggest doing some reading on Backblaze's blog before you claim some mythical reliability advantage for a harddrive in some strange mid-tier solution. You'll find that reliability figures don't change between enterprise and consumer grade stuff.
Now while you're spitting out observations let's dig into that for a while. I'm building an SQL Server and I work for a large enterprise. Do I
a) dedicate my valuable
Re: (Score:3)
There are differences in firmware though when you compare enterprise 7200rpm drives to desktop 7200rpm drives - error timeouts for example, and caching algorithms. You can tweak the drives to change the timeouts and recalibration times to make desktop drives behave better in arrays but they are _not_ otherwise identical. Also, although you can throw a SATA drive on a SAS controller (I have such a setup at home) throughput in an array is generally much better with SAS drives. At home I edit the timeouts on
Re:Just in time. (Score:5, Informative)
I work for a very large storage array manufacturer. Warranty length is *not* the only difference...
Agreed. The price is also different.
For reliability, I prefer actual data over your anecdotal opinion: Consumer drives shown to be more reliable than enterprise drives [computerworld.com].
Re: (Score:2)
I'm not sure it's even fair to consider an enterprise SATA drive an enterprise drive; if they were comparing to SAS, that would be something.
Re: (Score:2)
if they were comparing to SAS, that would be something.
Why would the interface change the reliability of the HDD?
Re: (Score:3)
Why would the interface change the reliability of the HDD?
Because a SAS drive is not identical to a SATA drive with a different interface attached.
Re: (Score:2)
but to segment the market HDD mfgrs put better electronics in SAS drives. better firmware too.
I think this is BS. The marginal cost of "better electronics" is pennies. The marginal cost of "better firmware" is zero. So you are basically saying that manufacturers intentionally create defective products. In a competitive market (and the HDD market is very competitive) there is no reason they would do that.
If they actually did that, it would show up in reliability tests, rather than just in the totally made up MTBF figures printed on the box.
Also, they don't need to. They can just tell people that
Re: (Score:3)
For reliability, I prefer actual data over your anecdotal opinion: Consumer drives shown to be more reliable than enterprise drives [computerworld.com].
This probably has more to do with TLER than anything, because consumer drives are designed with the expectation they'll be run as a single isolated disk, whereas enterprise disks are typically expected to be part of some RAID array, running in tandem with other disks the RAID controller can use to correct errors; so while an enterprise and a consumer drive might share the same physical hardware, the firmware for enterprise drives can differ significantly in the way it handles error recovery.
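For what it's worth, on drives that expose SCT Error Recovery Control you can inspect (and sometimes change) exactly this timeout with smartmontools. A minimal Python sketch, assuming smartctl is installed, you run it as root, and the drive actually supports SCT ERC; the /dev/sda path and the 7-second values are just example choices, and plenty of desktop drives will refuse the set command:

    # Query/set SCT Error Recovery Control (the TLER-style timeout) via smartctl.
    # Values are in tenths of a second, so 70 == 7.0 s. Requires root and a drive
    # that supports SCT ERC; many consumer drives will reject the set command.
    import subprocess

    def get_erc(device: str) -> str:
        """Return the drive's current SCT ERC read/write timeouts."""
        out = subprocess.run(["smartctl", "-l", "scterc", device],
                             capture_output=True, text=True, check=True)
        return out.stdout

    def set_erc(device: str, read_ds: int = 70, write_ds: int = 70) -> None:
        """Set SCT ERC timeouts (in deciseconds) for RAID-friendly behaviour."""
        subprocess.run(["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
                       check=True)

    if __name__ == "__main__":
        dev = "/dev/sda"        # example device path, adjust for your system
        print(get_erc(dev))
        set_erc(dev, 70, 70)    # 7 s read / 7 s write
        print(get_erc(dev))

The setting is usually lost on a power cycle, so it has to be reapplied at boot.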
Re: (Score:2)
This probably has more to do with TLER than anything because consumer drives are designed with the expectation they'll be run as a single isolated disk whereas enterprise disks are typically expected to be part of some RAID array
Except that the referenced study shows that consumer drives are just as reliable as "enterprise" drives WHEN USED IN A DATACENTER. Nothing that you mention explains that.
Re: (Score:2)
TLER is useful in all cases, just that it is pretty much mandatory for RAID, so the drive manufacturers disable TLER on cheaper drives to prevent them from being used in a RAID.
Yes, in theory, a non-TLER drive stands a better chance of recovering unreadable data, but during that time the PC appears to be frozen, so the user just reboots it. Even with the TLER default of 7 seconds, that's still a long time. I do not know about others, but I'd rather my PC be responsive and report the error so I can either restore t
Re: Just in time. (Score:2)
Get the HGST "NAS" [amazon.com] SATA drives, if you don't need SAS - they are much better than the standard SATA drives, meant for real storage duties - pretty much what nerds need for home. Great ZFS performance. Three year warranty is good enough.
Re: (Score:2)
Thank you for clearly stating your biases. However, your statement is too vague to be either verified or taken into account in a decision-making process in any meaningful way. Either of these would require knowing at least some of the specific differences.
Re: (Score:3)
you are better off with generation-1 than generation-current.
I completely agree. I'm about to retire a rack of 1TB drives in my NAS and replace them with three 4TB drives in a RAID 5 array. The 4TB drives had to be out a year before I started to trust them.
Live on the bleeding edge with shit you're not afraid to lose. Trust your important shit to well-tested 2nd or 3rd generation technology.
Re:Just in time. (Score:4, Informative)
Don't use RAID5 with drives over 1TB.
a) a RAID5 rebuild takes many hours, b/c it involves reading the entire disc.
b) drives from the same production batch tend to cluster failures.
c) I recall reading that the uncorrectable read error rate tends towards the 2TB mark.
That is, chances are very good that a single drive failure will become a 2-drive failure during a rebuild.
RAID6 or nothing.
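To put a rough number on (c): at the commonly quoted 1-in-10^14-bit rate you expect one unrecoverable read error per ~12.5 TB read, and a rebuild has to read every surviving drive end to end. A back-of-the-envelope sketch (taking the spec-sheet rate at face value and assuming independent errors, both of which are generous simplifications):

    # Rough odds of hitting at least one unrecoverable read error (URE) while
    # reading the surviving drives during a RAID5 rebuild. Uses the spec-sheet
    # rate of 1 URE per 1e14 bits and a Poisson approximation; real drives are
    # often better than spec, so treat this as a pessimistic ballpark.
    import math

    URE_RATE = 1e-14                     # errors per bit read (consumer spec)

    def p_at_least_one_ure(tb_read: float) -> float:
        bits = tb_read * 1e12 * 8
        return 1 - math.exp(-URE_RATE * bits)

    print(1e14 / 8 / 1e12, "TB read per expected URE")          # 12.5 TB

    # Example: RAID5 of four 8 TB drives, one dead -> rebuild reads 3 * 8 = 24 TB
    print(f"P(>=1 URE during rebuild) = {p_at_least_one_ure(3 * 8):.0%}")

By that (pessimistic) math, a four-drive 8 TB RAID5 rebuild trips a URE something like 85% of the time, which is the whole argument for double parity.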
Re: (Score:2)
The RAID-5 is not set in stone. RAID-6 is an option that has not been ruled out, odds are I will go that route.
I know about drive failures clustering in batches like that. I've been bitten by it before. I usually buy drives from different sources weeks apart. That increases the odds that the drives will come from different batches. I don't know if that affects the reliability of the drives themselves, but it makes me feel better.
Re: (Score:2)
I recall reading that the uncorrectable read error rate tends towards the 2TB mark.
12.5TB, assuming the 1-in-10^14-bit uncorrectable-read-error rate specified for most consumer drives is accurate. I certainly don't see rates anywhere near that high with my consumer drives, but I could just be lucky.
Re: (Score:2)
The question is though, what method are you using to test for these errors in the first place? How do you KNOW there has not been a read error occurring within the discs? This is a big reason why ZFS exists. http://en.wikipedia.org/wiki/D... [wikipedia.org]
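For the curious, the core idea is simple enough to sketch in a few lines: keep a checksum next to every block and verify it on each read, so silent corruption is reported instead of handed back as good data. This is only a toy illustration of the concept, not ZFS's actual on-disk format:

    # Toy sketch of per-block checksumming (the idea behind ZFS/BTRFS data
    # integrity), not any real on-disk layout: store a digest with each block
    # and verify on read so silent bit rot is detected rather than returned.
    import hashlib

    class ChecksummedStore:
        def __init__(self):
            self.blocks = {}                          # block_id -> (data, digest)

        def write(self, block_id: int, data: bytes) -> None:
            self.blocks[block_id] = (data, hashlib.sha256(data).digest())

        def read(self, block_id: int) -> bytes:
            data, digest = self.blocks[block_id]
            if hashlib.sha256(data).digest() != digest:
                raise IOError(f"checksum mismatch on block {block_id} (bit rot?)")
            return data

    store = ChecksummedStore()
    store.write(0, b"important archive data")
    data, digest = store.blocks[0]
    store.blocks[0] = (b"importent archive data", digest)   # simulate silent bit rot
    try:
        store.read(0)
    except IOError as err:
        print("detected:", err)

A plain drive or conventional RAID array doesn't verify data against a checksum on read, which is why a periodic scrub on a checksumming filesystem is the only way to actually know the data is still intact.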
Re: (Score:2)
Good, because these don't have Helium.
Re: (Score:3)
Their consumer drives have gone to absolute shit. I was buying them because they were marginally cheaper than the other choices. I ended up with a couple dozen running over the period of about a year. As each matured to about 1.5 years old, they started dying. Seagate reduced their warranty for consumer drives down to 1 year, so now they're all paperweights.
I guess they're ok, if you want to build a computer that you only want to use for 1 year. Maybe building out a machine for someone you don't
Re: (Score:2)
Re: (Score:3)
With 8 TB drive sizes I would think you would want double parity and some kind of hotspare. The rebuild times on that could be glacial.
Re:Just in time. (Score:4, Informative)
Crow, listen to this guy. Assuming these things have 100MBytes/sec write speed, a simple RAID-1 will take over 22 hours to rebuild.
If you want 8TB of usable space, get 4x4TB and RAIDz2 (i.e. RAID6) them. Even if it's disposable data, the data must be of sufficient use to justify a FreeNAS build over a simple external. It's worth your time to do it right.
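The 22-hour figure is easy to check; a quick sketch of the arithmetic (a sequential-throughput-only estimate that ignores seeks, remapped sectors, and any other load on the array):

    # Naive rebuild/resilver time: copy the whole drive at a fixed sequential
    # rate, ignoring seeks, remapped sectors, and concurrent load on the array.
    def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
        total_mb = capacity_tb * 1_000_000      # 1 TB = 10^6 MB (decimal units)
        return total_mb / mb_per_s / 3600

    print(rebuild_hours(8, 100))    # ~22.2 h at an assumed 100 MB/s
    print(rebuild_hours(8, 150))    # ~14.8 h at the 150 MB/s average quoted below

And that's the best case, with the array otherwise idle.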
Re: (Score:2)
The average read/write speed of this drive is 150MB/sec with a maximum sustained read rate of 190MB/sec. See http://www.seagate.com/files/w... [seagate.com]
Assuming only the average read/write rate it would take 14 hours and 48 minutes to simultaneously read from one drive and write to another.
8*1000*1000/150/60/60=14.81 hours
Re: (Score:2)
Yeah, assuming you're not doing anything at all with the array while it's rebuilding, and none of the sectors have been remapped causing seeks in the middle of those long reads/writes.
To throw out one more piece of advice; RAID6 is useless without periodic media scans. You don't want to discover that one of your drives has bit errors while the array is rebuilding another failed drive. RAID6 can't correct a known-position error and an unknown-position error at the same time. raidz2 has checksums that sho
Re: (Score:2)
The slower speed won't get you lower power here; the drive is slow when re-writing because, due to the tech used, it has to do some copy/delete/write work, very roughly similar to having to erase a whole block of flash to write a single logical 512-byte or 4096-byte sector.
If you mostly store large stuff that doesn't get deleted or don't care about the possible reduction in write speed, it's still fine to get that drive. (good at recording TV stuff you intend to keep, not that good if you're continuously rec
Re: (Score:2)
Ok, let's hear all the stories how Seagate sucks (Score:5, Funny)
and then let's hear about how it's all anecdotal evidence.
Then someone will bring out the backblaze survey.
Then someone will say "They've never had a problem with Seagate, but WD sucks."
Then someone will lament how IBM no longer makes drives. Then the deskstar stories will start.
In other words, the same responses every time a hard drive story is posted.
Re: (Score:2)
It's not deskstar but deathstar I was told...
As an anecdote, my first HDD to ever fry was an 8GB Deskstar. I lost everything. Now I have backups and RAID. Many failures later (at least 3), I've yet to lose a single bit I deemed important.
Re: (Score:2)
Then the deskstar stories will start.
Hey, that's one of the classics! SonyBMG rootkit, removal of OtherOS, and the Deathstar hard drives. Two decades of ranting excellence!
Re: (Score:2)
Boy I was about to post a Backblaze survey concerning enterprise vs consumer drives. I am so glad I waited until I read your post.
Re: (Score:2)
That started seventeen minutes ago: http://hardware.slashdot.org/c... [slashdot.org]
Get with the times already.
You can't say "Get with the times" in a comment where Slashdot was scooped by ZDNet [zdnet.com] five days ago. That ship sailed.
How many days will it take . . . (Score:2)
Re: (Score:2)
You're missing some zeroes in there somewhere.
It does take a long time to "rebuild" after a failure of a large drive. There's no denying that. It's silly to even try.
What I would dispute is the idea that one of the other drives will magically fail during the rebuild if it wasn't already showing signs of dying.
Due to Seagate's current quality levels, I have some personal firsthand experience with rebuilding large RAID arrays.
It's not quite as scary as the fearmongers would have you believe.
I guess not. (Score:2)
Archive? (Score:4, Insightful)
Re: (Score:2)
Re:Archive? (Score:5, Informative)
Re: (Score:2)
Yep, I remember watching some Linux conference talk about the upcoming hot new SMR tech (2 years ago?), and I think they said those drives are read-modify-write on every write; that is the price you have to pay for huge capacities.
They are targeting long term storage, and will be useless for desktop/server use.
Re: (Score:3)
Re: (Score:3)
XFS is not a copy-on-write filesystem, by the way. ZFS and BTRFS should definitely work better, but they might need some internal tweaks to make the best use of it.
Re: (Score:2)
Re: (Score:2)
The average read/write seek time of this drive is 12ms. That seems quite usable to me for multiple use cases.
reference: http://www.seagate.com/www-con... [seagate.com]
Re: (Score:2)
It looks to me like these drives write a large amount of data as a spiral of multiple tracks so that the platter must rotate many times to complete the write.
That's fine for streaming data sequentially to the disk for long term storage.
Random writes must be dog-slow, though.
sequential + idle time for garbage collection (Score:2)
Others replied mentioning it's because SMR is mostly useful for sequential writes, not random. That's true, and the drive also needs idle time between writes for garbage collection and remapping. It therefore fits the bill for daily backups, which are sequential and give the drive time to garbage collect before it's used again.
It's less suited to something like storing security footage, where it has to record 24/7. Unless of course the recording software is specifically designed for SMR drives and writes
Fun times (Score:3)
These drives are targeting more or less the same market. And judging by the number of complaints, WD's 4 and 6TB drives are not much better in the reliability department (although I might be wrong in that regard).
Re: (Score:2)
I've never seen as many bad drives as the 3TB WD Greens, but about 80% of mine are still working fine, and I only had to replace one early. The oldest now has nearly 30,000 power-on hours.
Re: (Score:2)
I've never seen as many bad drives as the 3TB WD Greens, but about 80% of mine are still working fine, and I only had to replace one early. The oldest now has nearly 30,000 power-on hours.
The 2 year warranty should be enough of a warning to all to stay far away from "green" variants.
What can one expect to happen when drives are constantly speeding up, slowing down and parking heads?
What's even worse, there is no meaningful difference in power consumption between Black and Green drives. If you want to save power, get a 2.5" drive... any environmental benefit is by far offset by the reduced lifespan and the energy cost of production.
Re: (Score:2)
The 2 year warranty should be enough of a warning to all to stay far away from "green" variants.
I have smaller Green drives that have been running 24/7 for five or six years with no user-visible problems. A couple of them are reporting 1 or 2 bad sectors, but that's it. None of the smaller Green drives have yet failed, with up to 45,000 power-on hours. But maybe I just got unlucky with the 3TB version.
What can one expect to happen when drives are constantly speeding up, slowing down and parking heads?
They don't constantly speed up and slow down, and I disabled the head parking.
What's even worse, there is no meaningful difference in power consumption between Black and Green drives.
There's a few watts if you have a bunch of them in a RAID. And there's a significant difference in price, or was when I last
Re: (Score:2)
Perhaps the way most mail-in rebate deals make money - most people end up being too lazy to actually send in the rebate form. Or don't have exactly the right paperwork to qualify: We're sorry - the packing/price list which clearly says RECEIPT at the top does not qualify as a receipt for the purposes of claiming this rebate. Oh, and by the way the rebate-claiming window is now closed, so don't even bother trying to get a "real" receipt from the seller to try again.
And it's no doubt helped that warranty pe
Re: (Score:2)
The thing is, the newest high-density hard disk technologies have lower reliability regardless of which particular supplier you choose, even if some are worse than others. I particularly hate Seagate.
Yay! (Score:2)
Now my backups can disappear because my Seagate "Archive" drive took a sh*t 2 years after I bought it.
Seriously. I just went through a stack of 5 Seagate HDDs, from different customers, with a sledge hammer. They all died with S.M.A.R.T. failures.
I wouldn't trust Seagate with my data unless I *wanted* it to self-destruct.
Re: (Score:2)
Re: (Score:2)
Seagate has been in the junk pile ever since they bought Maxtor.
Re: (Score:2)
The moral is that I will never, ever trust that company again. It's a shame, since they used to be (mid-90s) the best in the industry, imho
That's funny; back around 1990 I knew some people who ran a huge BBS (a whopping 2GB online!!), and they absolutely hated Seagates.
It seems like many of these companies go through phases.
Can I buy one? (Score:2)
Or is it like the current 8 and 10tb drives that only seem to exist at the fantastipotamus store?
Shingled encoding performance penalties (Score:5, Interesting)
Re: (Score:2)
It's handy that modern filesystems are mostly copy-on-write anyway.
Re: (Score:2)
these are WORM drives (Score:5, Informative)
Write Once Read Mostly
Shingled media is almost useless for random access, since rewriting a logical block means relocating its entire "shingle" strip somewhere else, then, at some other time, garbage-collecting the entire region and relocating the still-in-use blocks. You definitely want to run these "noatime", to prevent thrashing directory blocks, and they should probably have a new filesystem designed for them.
Some have tried tinkering with flash filesystems, since the copy/invalidate/garbage-collect pattern is similar: LBAs are gathered into some larger storage block in no particular order, and that storage block needs to be managed. I don't know if Seagate will tell us what the size of an erase block (a set of overlapping, concentric "shingles", which have to be collected as a group) really is, or if they'll even be a consistent size.
If you're streaming from them, you may hit "garbage collect" long access times, and I don't know what proprietary commands and settings may be available, if any, to tell the drive "now is a good time to do housekeeping".
As "archive media", shingled drives probably work OK, since that is a WROM application, but, personally, I would NOT use them on any existing file system.
Re: (Score:2)
Mod up. The one clueful post to the article.
Re: (Score:3)
From the summary:
(emphasis mine)
Re: (Score:3, Funny)
> So...it's magic?
Could be. Or, it could be sufficiently advanced technology-- it's hard to tell.
Re: Helium and the density of the disc (Score:4, Funny)
fucking helium, how does it work
Re: Helium and the density of the disc (Score:5, Insightful)
No, straight helium is a lower-density medium than normal air. This means less atmospheric friction and less drive motor friction while spinning a platter.
Since the individual platter assemblies run cooler, they can pack them closer together (and put more in a given drive casing).
Also, because they have to hermetically seal a platter assembly into the helium atmosphere, with some modifications, such drives can be used in full-immersion cooling, where normal air-cooled drives need to breathe.
Re: (Score:2)
No, straight helium is a lower-density medium than normal air. This means less atmospheric friction and less drive motor friction while spinning a platter.
Wouldn't it be better to just create a vacuum inside the drive? Aluminum's vapor pressure doesn't reach 10^-10 torr until about 600 C.
Re: (Score:2, Informative)
The disk heads use lift generated by the air, or gas, in the drive to float above the platter. Under vacuum, the heads would scrape against the platters and would shortly render the drives unusable.
Re: (Score:2)
No, straight helium is a lower-density medium than normal air. This means less atmospheric friction and less drive motor friction while spinning a platter.
Wouldn't it be better to just create a vacuum inside the drive? Aluminum's vapor pressure doesn't reach 10^-10 torr until about 600 C.
No, because a vacuum would actually insulate the components, as there's nothing for heat to disperse through EXCEPT other components.
Re:Helium and the density of the disc (Score:5, Informative)
"Reduce friction" is pretty close, actually.
The platters spinning around causes a lot of air to move around, as well. If that air is helium, the effects of the turbulence are less forceful, so moving parts don't need as much buffer space between them.
The individual platters don't change density, but since they can be packed closer together without aerodynamic damage, there can be more platters in a single unit.
Re:WD, SG unreliable..but (Score:4, Informative)
There are only 3 hard drive makers [wikipedia.org] left. Hitachi is not one of them.
Re:WD, SG unreliable..but (Score:4, Informative)
Not entirely correct:
http://www.theregister.co.uk/2... [theregister.co.uk]
They are owned by WD, but they are still a manufacturer in their own right. For the moment.
Re: (Score:2)
I've never had any problems with WD, but fully agree about Seagate drives. They must have a built-in self-destruction device.
Re: (Score:2)
Raw 4K video footage, easily.
Re: (Score:2)
Backups. Lots and lots of backups. Too bad I *just* bought a 2 TB disk for that purpose a few days ago.
Re: (Score:2)
Steam games. It will last about a month or so before I have to add another. :P
Seriously, modern games are nearing 100GB.
Re: (Score:2)
...all of the multimedia I have collected since acquiring my first PC clone.
This includes any CD, DVD, or BD that I buy.
All of the convenience of streaming media but none of the downsides like network outages or license revocation or just plain lack of availability.
Re: (Score:2)