Seagate Says 16TB Hard Drive To Hit Market Within 18 Months (techspot.com) 232
An anonymous reader shares a report: If you haven't shopped around for hard drives in a while, you may be surprised at what's out there. The largest 3.5-inch desktop hard drives currently available from Seagate, for example, offer a whopping 10TB of capacity for less than $500. In the event that 10TB isn't quite enough storage and a multi-drive setup isn't ideal, you'll be happy to hear that Seagate plans to ship 14TB and 16TB drives over the next 18 months. A 12TB HDD based on helium technology is currently undergoing testing and, according to CEO Stephen Luczo, initial feedback is positive. Most enthusiasts and even some PC manufacturers now use solid state drives as their primary drive because they're much faster and more power-efficient. What's more, because they have no moving parts, SSDs generate no noise and are much more durable.
Great! (Score:5, Funny)
Re: (Score:2)
Put all your eggs in one basket... and then watch (ie backup or mirror) the basket very carefully.
Re:Great! (Score:5, Interesting)
How long will it take to rebuild a raid array with discs that size? Even with only raid 1 I'd think the times would be horrendous.
Re:Great! (Score:4, Funny)
disks*
Re: Great! (Score:2)
Re: Great! (Score:2)
Re: (Score:3)
Re: (Score:2)
Indeed, and sometimes you can't even correct the errors. The RAID layer doesn't know anything about higher-level structures like file systems, so restoring a dump won't save you. It has to be repaired at the RAID level, which often means destroying and recreating the entire array.
With the amount of data today, RAID 6 or three-way RAID 1 is almost a must. With 16 TB disks, that means a minimum of 32 TB spent just on parity or copies. I'd much rather have more and smaller drives, even if that means extra enclosures.
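The overhead figure above checks out; a minimal sketch of the arithmetic (variable names are mine):

```python
# Capacity spent on redundancy with 16 TB disks. RAID 6 reserves
# two disks' worth of parity; three-way RAID 1 keeps two extra
# full copies of the data.
disk_tb = 16
raid6_parity_tb = 2 * disk_tb       # 32 TB of parity
mirror_3way_extra_tb = 2 * disk_tb  # 32 TB of extra copies
```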
Re: (Score:2)
Re: (Score:3)
Re: Great! (Score:4, Informative)
Please provide a link where I can buy a cheap 16TB tape drive. Even an LTO-7 is too small, so you have to play tape jockey, and the tapes cost about the same per TB as the disks. And that's after you pay the extortionate price for the drive itself.
Tape possibly makes sense if you can afford an autoloader. HP has an LTO-6 autoloader for $4,239.99 that will hold 20TB native (50TB is the optimistic compressed figure). It will, however, only backup/restore 560GB per hour. Let us hope you have a slowly changing dataset and incremental backups are your thing...
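For a sense of the backup window that throughput implies (using the 20TB native capacity and 560GB/hour rate quoted above):

```python
# Time to stream the autoloader's full native capacity at the
# quoted backup/restore rate.
capacity_gb = 20_000             # 20 TB native, in GB
rate_gb_h = 560                  # quoted throughput, GB/hour
hours = capacity_gb / rate_gb_h  # ~35.7 hours for one full pass
```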
Re: (Score:2)
I use btrfs. So long as you avoid RAID5-a-like mode it works very nicely. I wish RAID5 were fixed though.
Use ECC RAM with ZFS! (Score:3)
Install ZFS on your Linux box.
If you're going to do this for anything other than experimental purposes, be sure you're using ECC RAM. You probably aren't and your current board most likely doesn't support it, so you'll need to get a new motherboard, possibly a new CPU and new RAM.
Re: Great! (Score:4, Informative)
For these large drives you really want something like SnapRAID for their use cases. Large media stores, backups, and other bulky, rarely changing datasets are perfect for it. Not to mention that since the data on any single drive is coherent, you can lose more drives than parity can correct and still only lose the files with error blocks, or the contents of the one drive that completely failed.
Right now I'm using 8TB drives as they're the best price per GB.
Re: (Score:2)
So that I do not need to buy new drives every time I want to add a file.
Also, when there is not much free space left, ZFS fragments badly, resulting in slower speeds. A slightly bigger hard drive (or more of them) is usually cheaper than putting the entire storage on SSDs, especially if I do not need the SSD speed.
Re: (Score:2)
Re: (Score:2)
I'd assume it would fill in less than 2x the time, since it can likely write faster given that it's packing the data into a smaller physical space. There are likely more tracks too, so the time should still increase, but maybe by only 50%?
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
How safe is it to run the drive 100% busy for an extended time period? I've always heard that's a bad idea with consumer drives; they just aren't built to withstand the workload.
Re: (Score:3, Insightful)
On SATA? Days to weeks would be my guess. With drives this size, RAID-6 isn't even enough. It really needs triple parity, especially with drive arrays that contain 8-10 drives, or with 12+, quad parity.
I'd like to see drive makers focus on reliability. Areal density is quite high these days. Why not build in two different drive heads that can work in an active/active configuration (some drives about a decade ago had this ability), more ECC, bit-rot resistance, and more resistance to shock and vibration?
Re:Great! (Score:4, Insightful)
I'd like to see drive makers focus on reliability.
They do. HDD reliability has been going up for a long time. Some brands and models are much more reliable than others; Google, Backblaze, and others have published longitudinal data about that. The MTBF printed on the packaging means absolutely nothing. If you care about reliability, check the reliability data, stick to the "one-back" rule, and don't buy bleeding-edge hardware.
Why not build in two different drive heads that can work in an active/active configuration
Because customers that need high speed non-consecutive I/O have mostly moved to SSD.
The only reason to use HDDs is because they are cheap. So anything that adds to the cost, just pushes more customers to SSDs.
Re: (Score:2)
Re: (Score:3)
Operate 1000 drives for 100 hours, count the failures, do the math, and you get MTBF!
No! That is NOT how MTBF is calculated. Here is how it is calculated: Engineering designs a HDD. Manufacturing builds it. Then the marketing department decides on what MTBF to print on the box. They want three price points: good, better, best. But it is not cost effective to design and manufacture three different drives, so they actually only design one, and the drives sold at each price point are identical except for the MTBF printed on the box, and the warranty.
Longitudinal data has repeatedly sho
Re:Great! (Score:5, Informative)
So a straight sector-by-sector (sequential) copy of 16 TB drive to another 16 TB drive would take 16000 GB / 250 MB/s = 64000 seconds, or just under 18 hours.
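A quick sanity check of that arithmetic (the sustained 250 MB/s is the assumption from the comment; real drives slow down on inner tracks):

```python
# Estimated time for a sequential 16 TB drive-to-drive copy at an
# assumed sustained 250 MB/s (decimal units throughout).
capacity_mb = 16_000 * 1_000       # 16 TB expressed in MB
rate_mb_s = 250                    # assumed sustained throughput, MB/s
seconds = capacity_mb / rate_mb_s  # 64,000 seconds
hours = seconds / 3600             # just under 18 hours
```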
ceph with smaller disks over 3 or more nodes (Score:2)
ceph with smaller disks over 3 or more nodes
Re: (Score:2)
If you read the original comment, it was "Now I can lose even more data when a single disk crashes", in which case RAID is a perfectly valid answer.
Read and stop your nonsense.
Re: (Score:2)
Re: (Score:2, Offtopic)
All of them? (Score:2)
Point me to a local backup solution that can handle 16TB in a single go.
Pretty much anything?
But mainly it will all work because most backup systems are incremental. I can easily maintain a backup for a 16TB drive because week to week I'll not have 16TB to back up...
I back up to an offsite drive about once a month. Even then I'll have perhaps 500GB to transfer, which is easily manageable in an evening.
Point me to a cloud backup solution that can handle 16TB
But that's a problem today, and is irrelevant to
Re: (Score:2)
Yes , of course but... (Score:2)
You still need an initial snapshot on which to base the incremental diffs though.
Yes, and???
It's not like you are going to have 16TB on day one, is it? So then you just have the initial time to move whatever to a newer drive, then incremental costs after... In fact you do not HAVE to have 16TB of capability on the backup until the content you are backing up reaches that point. I've taken that approach before, which is a good idea as it staggers out the drive purchases, making it more likely you get drives fr
Re: (Score:2)
Point me to a local backup solution that can handle 16TB in a single go.
tar, dump and xfsdump all handle 16TB sizes in a single go. No special software needed.
What kind of medium?
You need three 6 TB tapes, or another 16 TB drive.
With six tapes or two HDDs, you can do tower-of-hanoi incremental backups for quite a while, avoiding having to do full backups, without the restore time growing out of bounds.
With nine tapes or three HDDs, you can also have redundancy against backup media failures.
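For the curious, the Tower-of-Hanoi rotation mentioned above follows the binary "ruler" sequence; here is a minimal sketch (the function name is mine, not from any backup tool):

```python
def hanoi_medium(session, n_media=6):
    """0-based index of the backup medium to use for 1-based session
    number `session`: medium 0 every other session, medium 1 every
    4th, medium 2 every 8th, and so on (the ruler sequence), capped
    at the number of media available."""
    trailing_zeros = (session & -session).bit_length() - 1
    return min(trailing_zeros, n_media - 1)

# With six media labeled A-F, the first 16 sessions rotate as:
schedule = "".join("ABCDEF"[hanoi_medium(k)] for k in range(1, 17))
# "ABACABADABACABAE"
```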
Re: (Score:2)
My point is, the average PC user has no simple way to back up this data. I am well enough versed with tar that I could do what you say, but I still have to have this tape drive and I still have to change tapes. One of the biggest reasons to avoid backing up to a second identical disk in the same computer is vulnerability of that disk to site problems that take out the computer, to theft (as it's in the PC itsel
Re: (Score:2)
> Point me to a local backup solution that can handle 16TB in a single go.
Are you kidding? Something as primitive as rsync can manage that. The real problem is that it will just run forever. If you have a good OS, a batch job that runs for days is not really a problem.
Re: (Score:2)
/. Editor: Crappy Summary (Score:3)
Why are the two closing sentences about SSDs there?
They made the summary confusing and off point.
how much porn is that? (Score:3)
and will it be enough for the digital hoarders out there?
Re:how much porn is that? (Score:4, Funny)
11Mpussy if you use MKS units. Slightly less if you use imperial.
Re: (Score:3, Funny)
For the less technically inclined, that is about 350 MWanks.
Too expensive... (Score:3)
Re: (Score:2)
I'm on a similar schedule - though I went for ~$100 each for 3TB drives since I have (presumably) higher storage needs and only 4 SATA ports available for RAID. The low end hasn't dropped fast enough (and I need more storage) and you get more bits for your buck at the higher price if you need it.
You'll be lucky if even 3TB drives hit the $50 mark in the next 3 years. In fact, the 3TB drives I bought almost 2 years ago are still over $80.
Re: (Score:2)
You'll be lucky if even 3TB drives hit the $50 mark in the next 3 years.
1TB+ SSDs will probably be more affordable in the next few years.
Re: (Score:2)
The mid-range flash cells have been stagnant on price for a while. The 250GB Samsung 850 EVO spent most of 2015 and 2016 at $90. Now it's $100. Sure there are a lot of cheaper options, but at this rate of change, I don't have much hope for 1TB coming down in price any time soon.
Re: (Score:2)
I take that back. 2015 was the 120GB model at that price, but 250GB was still under $140 back in 2014.
Re: (Score:2)
Re: (Score:2)
They are externals, but easy to disassemble.
That might be fine for single drives. For a RAID configuration, you want identical drives under warranty. Popping open an external drive probably voids the warranty.
Re: (Score:2)
For a RAID configuration, you want identical drives under warranty.
No. You want identical in size and performance, but not identical drives if you can help it. Certainly not drives from the same production run. Good system providers will shuffle drives so you get different ones, or at least from different production runs.
The reason is that you don't want drives failing at the same time. From the time a drive fails until a hot spare has been fully populated and tested, you're at your most vulnerable.
Re: (Score:2)
You want identical in size and performance, but not identical drives if you can help it.
When I replaced the hard drives in my home file server, I bought four locally over a six month period when they were on sale and the store had to restock each time. Two from Newegg about nine months apart. All the serial numbers are quite different.
From the time a drive fails until a hot spare has been fully populated and tested, you're at your most vulnerable.
Not with a RAID-6 configuration (a second hard drive would have to fail). I also have a full backup on a separate hard drive.
What are the use cases for these drives? (Score:2)
As many a wag has pointed out, a 16TB drive means there is more of your data to lose in a crash. I also have to think that the latency for finding specific files on the drive, especially in a server, is going to be a concern.
I guess for the home user, this might be a great way to store 100 or more Blu-Rays for streaming around the house but I have to wonder if these drives are reaching sub-optimal sizes for server farms/cloud based storage.
Re:What are the use cases for these drives? (Score:5, Insightful)
a 16TB drive means that there is more of your data to lose
In most cases, if you can fill a 16 TB disk, that data isn't actually yours.
Re: (Score:2)
Re: (Score:3)
It's not a big deal if you don't do it all at once. It's like using iTunes or Netflix but without the network or the problem of things "going away".
Re: (Score:2)
I probably could, not that I would... but I'm sure 4 IR security cameras @1080p recording 24/7 wouldn't take that long to fill up 16TB.
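For a rough sense of scale (the 4 Mbps per-camera bitrate is my assumption, typical for 1080p H.264; it is not from the comment):

```python
# Days to fill 16 TB with 24/7 footage from four 1080p cameras,
# assuming 4 Mbps per camera.
cameras = 4
bitrate_mbps = 4                                  # assumed per-camera bitrate
mb_per_day = cameras * bitrate_mbps / 8 * 86_400  # MB written per day
days_to_fill = 16_000_000 / mb_per_day            # roughly three months
```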
Re:What are the use cases for these drives? (Score:5, Interesting)
I store my physical CDs as straight up
Re: (Score:2)
Re: (Score:2)
> In most cases, if you can fill a 16 TB disk, that data isn't actually yours.
You're projecting. You're the thief and you think everyone else is.
Re: (Score:3)
Re: (Score:2)
for when you download every movie, song, and TV show there is, even though it's impossible to actually consume it all within a human lifetime, but you still do it because it makes you feel superior
And you still can't back it up (Score:2)
Where are you going to put that kind of data, should you manage to fill one of these? In the cloud? No way, and your ISP will love the data cap overage charges if you try. Another drive? Well, unless you buy at least three of these then that will get expensive fast, requiring multiple older drives per one of these. Tape? Have you looked at LTO or similar prices? Not gonna happen for home users, even most businesses. So, when your rust stops spinning and the data is at rest, where do you turn?
Re:And you still can't back it up (Score:5, Insightful)
Buy 2
Re:And you still can't back it up (Score:5, Funny)
Re: (Score:2)
Just in case you aren't joking, that's a *really* bad idea.
Re: (Score:2)
This won't be made primarily for _you_; its real value is _in_ the cloud. Lower overall power usage in a high-density environment, and in spite of what some might think, even a high-cost drive will save money when you scale out, as long as its benefits can be felt on that scale: lower wattage, better rack utilization with more TB per U, fewer individual points of failure per PB, and lower overall cost per GB on the PB scale (and probably on the TB scale as well).
Re: (Score:3)
Where are you going to put that kind of data, [...] Another drive? Well, unless you buy at least three of these then that will get expensive fast, requiring multiple older drives per one of these.
Well, My use case makes this what is likely to happen.
I'll drop one of these in the system and it will act as the WORM drive for bulk data.
As the data is created it is written to smaller/faster disks (still spinning rust, whatever 2.5" is cheapest per gig, or even previously used drives that have been tested clean). Once a dataset is complete it is written to the WORM drive; once the smaller disk is full it is pulled from the system, put on the shelf, and a new blank put in its place. Instant offline
Re: (Score:2)
Lots of people have terabytes in the cloud. I've got about 4TB backed up (encrypted of course). Took a few months to upload.
Re: (Score:2)
Sigh (Score:2, Funny)
Re: (Score:3)
Re: (Score:2)
Yep, mine too was a 20MB and I was all excited when I got it. It cost me $600 back then too!!!
Re: (Score:2)
The first in my house was a 286 w/ ST 225, but my first that was *mine* was an older 8088 with an ST512 FH 5MB disk. I was so f-ing proud of myself for that machine (built with hand me down parts and bits I bought/was given at the old swap meet I went to).
Re: (Score:2)
Mine was a 286 without a hard drive. We called loading anything on that machine doing the "diskette disco".
We bought a 40MB hard drive some years later, as an upgrade.
Re: (Score:2)
Home built Apple ][ (6502) with a cassette tape.
Makes No Sense (Score:3)
Most of the comments so far seem to be about 16TB being a bit on the ridiculous side for PCs and even small servers, etc. What these are exciting for isn't RAID or traditional PCs but high-density storage for Big Data, which typically doesn't use RAID and generally only looks at SSDs as a "hot tier" solution. 16TB spindles sound great to me, but I'd never stick one in my home PC.
Re: (Score:2)
Slightly off-topic: I want "WORM SSDs" for backup (Score:4, Interesting)
I'd love to see someone come out with a cheap, trivial-to-use "WORM* USB stick" along with "plug and play" backup software.
Such backups would be impervious to being over-written by ransomware. If using them became commonplace, it would cripple that industry.
Such media could also be used for security systems or any other kind of data-logging system: Record everything to write-once media (along with a copy of recent data to a cached journal, so changing media doesn't cause interruptions).
There is a good business case for this: It provides a nice "give away the flashlight, sell the batteries" profit center for vendors: People would need to replace the USB sticks when they filled up. The key is that it will have to be no more expensive than ordinary USB sticks of the same capacity.
Before you mention "data retention/deletion policies": I'm envisioning this for home users and some types of small businesses, not large businesses or those subject to government-driven data-deletion policies.
----
* By "WORM" I mean the actual hardware/firmware enforces the write-once aspect, not just a USB stick with an OS-level device driver that makes it "write once." This should actually be cheaper to manufacture than typical USB sticks since you would not need to provide "erase" circuitry nor would you need to have wear-leveling logic in the device's firmware.
Re: (Score:2)
Another case, while we're at it.... datacenters.
This would be cool for archival or cold tier storage solutions, where the data is flagged as having some acceptable degree of permanency is moved onto these WORM devices. I can think of all sorts of applications - financial, backup, legal, content libraries with immutable data, (like old documents, manuals, videos, etc.).
You could focus more on read speeds and less on write issues, and while I'm no expert, I imagine there are plenty from an engineering point
Re:Slightly off-topic: I want "WORM SSDs" for back (Score:4, Interesting)
This should actually be cheaper to manufacture than typical USB sticks since you would not need to provide "erase" circuitry nor would you need to have wear-leveling logic in the device's firmware.
Former Flash validation engineer here...
Sadly not the case. The erase circuitry will still be needed if only so you can adequately run test patterns on the parts. Have to return the device to 0xFF's after testing so your customers can use it.
That said, there is the ability to disable erase in the field by setting a bit in the FACS array as the last step of testing.
Re: (Score:2)
That said, there is the ability to disable erase in the field by setting a bit in the FACS array as the last step of testing.
For all practical purposes, is this an irreversible step?
If not, I would prefer some other method, such as cutting a trace or burning out a fuse so that the drive was guaranteed to be "write once, erase/delete never."
For "forensic" purposes, "guaranteed non-erasure" is a hard requirement.
Re: (Score:2)
Re: (Score:3)
I'd love to see someone come out with a cheap, trivial-to-use "WORM* USB stick" along with "plug and play" backup software.
You may be waiting a while. Flash isn't cheap enough and it has data retention problems. Phase-change memories (of which 3D XPoint seems to be a variant) also have difficulties with long-term retention. If you don't need it to be a USB stick, WORM behaviour is commonly available in optical storage media, including Blu-ray.
Now it's helium-filled drives. (Score:2)
But when will we see hydrogen filled drives?
Re: (Score:3)
Re: (Score:2)
But when will we see hydrogen filled drives?
Never. The drives would just hover away :-)
Re: (Score:2)
Re:Now it's helium-filled drives. (Score:5, Funny)
My friends speak quite highly of helium.
MTBF? (Score:2)
4 days?
Sorry, but it seems like the bigger Seagate drives get, the less reliable they become.
Re: (Score:2)
My U160 SCSI Cheetah 15K7's on a Mylex controller. For speed that's still a hard combo to beat.
Ouch. I didn't think people still use parallel SCSI.
Been 20+ years since I worked on FW for those.
Re: (Score:2)
My U160 SCSI Cheetah 15K7's on a Mylex controller. For speed that's still a hard combo to beat.
Yeah, but who worries about speed from spinning disks anymore, outside of some enterprise uses? And those are shrinking fast as cache and solid-state storage both quickly drop in price. If you need speed, you go SSD.
Re: (Score:2)
No really it isn't.
To read something from a hard drive you have to seek to the right track and wait on average half a rotation for it to come under the head. So a 15K RPM hard drive maxes out at under 500 IOPS. A RAID array can help a bit, provided the host can queue up enough operations at once that all the drives stay busy.
15K RPM hard drives have been basically squeezed out by falling SSD prices. The SSDs now offer a comparable cost per gigabyte and far higher performance.
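That 500 IOPS ceiling follows directly from the rotational latency alone:

```python
# Upper bound on random IOPS for a 15,000 RPM drive, counting only
# rotational latency (average half a revolution per access; seek
# time would lower this further).
rpm = 15_000
rev_per_sec = rpm / 60             # 250 revolutions per second
avg_latency_s = 0.5 / rev_per_sec  # 2 ms average rotational latency
max_iops = 1 / avg_latency_s       # 500 IOPS ceiling
```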
Re: (Score:2)
Relatively cheap long term storage is the only reason I am aware of.
Re:Still using (Score:5, Informative)
Re:Still using (Score:5, Insightful)
Actually, deal prices for HDDs have yet to drop below ~$30 a terabyte. That's 2010-era, pre-flood/pre-consolidation pricing. I haven't seen a price for a new drive from a quality brand dip below that.
Meanwhile I've seen SSDs hit $200/terabyte. So the price delta is 6-10x at this point. It's rapidly shrinking.
Re: (Score:2)
So the price delta is 6-10x at this point
That's a pretty good reason to buy spinning disks. By strange coincidence I actually bought one early this afternoon for backups. My plan is to buy two in the 5-6TB range from different manufacturers and use rsync with --link-dest for incremental backups.
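The --link-dest scheme mentioned above looks roughly like this; a minimal sketch that only builds the command line (the function name and layout are mine, not from rsync):

```python
import datetime

def rsync_snapshot_cmd(src, backup_root, today=None, prev=None):
    """Build (but don't run) an rsync invocation for hardlink-based
    incremental snapshots. With --link-dest, files unchanged since
    the previous snapshot are hardlinked rather than copied, so each
    snapshot looks like a full backup but only changed files use
    new space."""
    today = today or datetime.date.today().isoformat()
    cmd = ["rsync", "-a", "--delete"]
    if prev:
        cmd.append(f"--link-dest={backup_root}/{prev}")
    cmd += [f"{src}/", f"{backup_root}/{today}/"]
    return cmd
```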
Re: (Score:2)
You realize that spinning disks that size are "archive" only usage. Not actual usage. By that measure, tape is really cheap. There is a reason why you hardly see that any longer.
Re: (Score:3)
You realize that spinning disks that size are "archive" only usage. Not actual usage. By that measure, tape is really cheap. There is a reason why you hardly see that any longer.
Typically "archive" hard drives means that they have relatively poor performance, not that they're bad. For instance, if you're looking to put together a NAS to store a bunch of media like TV shows or movies, they're just fine. You'll write infrequent changes, mostly when adding new content, and read sequential streams, both of which archival drives handle just fine. That's actual usage. They are not intended for, say, write once, then store in a closet offline for years.
Re: (Score:2)
Bake your SSD in an oven (Score:3)
Strange that the discrete 800 degree heating units haven't been integrated AFAIK. However, 250 degrees in an oven for a day fixes most of them.
http://www.bbc.com/news/technology-20579077 [bbc.com]
Re: (Score:2)
Wear-leveling makes "deletion" permanent
That was always a bug and not a feature. If you want a backup, set up a backup.
Re: (Score:2)
They don't start to die... they just die.
Platter drives will start to give you problems, at which point you can usually buy another drive and transfer your data.
If an SSD goes, it's gone. Poof. I had it happen on a work laptop once. That was about six years ago, so maybe they have gotten a little more reliable since then. At home, on my machine, it's all platter drives.