Intel SSD Roadmap Points To 2TB Drives Arriving In 2014 183
MojoKid writes "A leaked Intel roadmap for solid state storage technology suggests the company is pushing ahead with its plans to introduce new high-end drives based on cutting-edge NAND flash. It's significant for Intel to be adopting 20nm NAND in its highest-end data center products, because of the challenges smaller NAND nodes present in terms of data retention and reliability. Intel introduced 20nm NAND lower in the product stack over a year ago, but apparently has waited till now to bring 20nm to the highest end. Reportedly, next year, Intel will debut three new drive families — the SSD Pro 2500 Series (codenamed Temple Star), the DC P3500 Series (Pleasantdale) and the DC P3700 Series (Fultondale). The Temple Star family uses the M.2 and M.25 form factors, which are meant to replace the older mSATA form factor for ultrabooks and tablets. The M.2 standard allows more space on PCBs for actual NAND storage and can interface with PCIe, SATA, and USB 3.0-attached storage in the same design. The new high-end enterprise drives, meanwhile, will hit 2TB (up from 800GB), ship in 2.5" and add-in card form factors, and offer vastly improved performance. The current DC S3700 series offers 500MBps writes and 460MBps reads. The DC P3700 will increase this to 2800MBps read and 1700MBps writes. The primary difference between the DC P3500 and DC P3700 families appears to be that the P3700 family will use Intel's High Endurance Technology (HET) MLC, while the DC P3500 family sticks with traditional MLC."
Don't expect too much from Intel... (Score:5, Interesting)
I tried to find out for a large customer how long the current enterprise SSDs live, but Intel declined to comment. Through the grapevine I have heard of people doing complete replacements every 6 months to prevent failures in production environments, after they learned the hard way that these are not as reliable or long-lived as many people think. Especially small-write endurance seems to be pretty bad.
Re: (Score:3, Interesting)
I've used them at a Global Top 100 website for several years with significantly fewer failures than any of the SAS drives they replaced.
We installed roughly 500 Intel SSDs across several different workloads: databases, web servers, etc. In the last two years, we had 1-2 failures. For the record, when SSDs fail they usually just go read-only. When spinning rust fails you usually lose all your data. Statistically speaking, a 24-drive SAS array is going to have more frequent failures than a 4-drive SSD array, a
Re: (Score:2)
> I've used them at a Global Top 100 website
Doesn't mean squat really. That doesn't really tell us anything about the mix of IO operations you're doing or how that compares to what the other guy is doing.
"website"? Big deal...
Re:Are you kidding? (Score:5, Informative)
I dislike SSDs because when they fail they do it catastrophically.
Huh? The typical failure mode for an SSD that hits its write limit is actually to simply become read only. Compared to an HDD's likely "all your data is gone", I'd hardly call that catastrophic.
Re: (Score:2, Interesting)
Huh? The typical failure mode for an SSD that hits its write limit is actually to simply become read only.
In my experience, the typical failure mode for an SSD is not to reach its write limit, but to destroy all your data through a firmware bug. The typical failure mode for an HDD is an increasing number of bad blocks, which allows you to recover most of your data before it dies.
Re: (Score:2)
Re: (Score:3)
This is not a failure of SSDs; it is a failure of the vendor you bought your SSDs from.
Yes. Like every SSD vendor on the planet.
While it's purely anecdotal, I only know one person who's ever worn out an SSD. I know plenty who've lost all their data when the SSD failed due to buggy firmware; most commonly when there's a sudden loss of power.
Re: (Score:2)
I don't think you have much experience in this matter.
Re: (Score:2)
I've never seen an SSD fail in that manner. When you get cells that no longer write, the controller still tries to write to them, the write partially works, and you end up with massive data corruption. The wear leveling mechanisms, which often only look at how many writes a cell has had and not whether the cell is still writable, can also make a fantastic mess of the data on the drive, even while you've got it mounted read-only trying to do data recovery.
Though the most common failure is eit
Pushing out the Cache drive (Score:2)
and the mac pro will be stuck at 1TB max 256 min (Score:2)
and the mac pro will be stuck at 1TB max / 256GB min for a year or more, with the same price for that time as well.
Another predictable ./ perspective... (Score:2)
As Dr. Frank-N-Furter says in RHPS [imdb.com], "I didn't build him for YOU!!!" It's amusing whenever new datacenter/server technology gets posted on /. that half the posts evaluate the proposed product in terms of how affordable/practical/useful it would be to them in their little client desktop or notebook. All of these Intel drives are intended for server (or at least technical workstation) use, so they need to be evaluated by the ROI they give a business doing high-throughput work. If you think they have great stats b
Pathetic write endurance. (Score:4, Informative)
P3500 = 374TB endurance for the 2TB model = about 2 days of continuous writing before the drive dies = standard MLC
P3700 = about 50 days of continuous writing = HET MLC
while the old Samsung 830 routinely did >1PB with the 256GB model.
No, you won't be writing a mere 20GB per day; these are not home-use drives, they go into servers and get killed by bcache.
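As a rough sanity check of those figures, here is a minimal sketch assuming the 374TB rating quoted above and the ~1700MBps sequential write speed from the summary; the numbers are illustrative, not taken from an Intel spec sheet:

```python
# Rough endurance estimate: how long continuous sequential writes would take
# to exhaust a drive's rated terabytes-written (TBW) figure.
# The numbers below are the ones quoted in this thread, not official specs.

def days_to_exhaust(tbw_terabytes: float, write_speed_mb_s: float) -> float:
    total_bytes = tbw_terabytes * 1e12             # rated endurance in bytes
    bytes_per_day = write_speed_mb_s * 1e6 * 86400
    return total_bytes / bytes_per_day

# 2TB DC P3500 at the ~1700MBps sequential write speed from the summary
print(f"P3500: {days_to_exhaust(374, 1700):.1f} days")       # ~2.5 days

# If the P3700's rating were ~20x higher, as the '50 days' claim implies
print(f"P3700: {days_to_exhaust(374 * 20, 1700):.1f} days")   # ~51 days
```

Of course, no real workload sustains peak sequential writes around the clock, which is why the writes-per-day figures discussed further down matter more in practice.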
Before increasing capacity, what about encryption? (Score:2)
I've upgraded a number of customer machines from HDDs to SSDs and the performance boost is profound, no doubt, and Intel is one of the best performers.
But what's kept me from upgrading my own machine is: encryption support.
The use of hardware compression in Intel and other SandForce-based SSDs is reportedly at odds with software-based on-the-fly (OTF) encryption options like TrueCrypt, because encrypted data is incompressible, so those benefits are lost. It will probably still be faster than an HDD, but n
Re: (Score:2)
I also upgraded from an HDD to a non-encrypted SSD in one of my home computers and I would say the performance increase was about the same.
Re: (Score:2)
What you're looking for is called the eDrive standard. It lets the OS interface with the SSD in such a way as to allow Bitlocker and other whole drive encryption methods while using the SSD controller to do the encryption.
Re: (Score:2)
Many SSDs don't rely on compression to store their data. I've never bought one that did, and as far as I am aware, the problems you are describing were solved a long time ago. The laptops at my office (which have SSDs) are all currently encrypted using Sophos full disk encryption, and we are soon moving to BitLocker instead (not sure exactly why, other than that Sophos in general just sucks).
5x price differential at any time (Score:2)
In the good old days (Score:2)
Re: (Score:2)
Why did you buy anything bigger than 640k?
Re: (Score:2)
I notice that flash is currently going for about 50 cents a GB and disk about 10 cents.
The $0.50/GB for flash is for the very cheapest SSDs available, and only when they're on a good sale. More likely is $0.75/GB on the low end, and it goes up from there.
As for hard drive prices, the lowest is a good deal cheaper -- you can find 3 TB external drives for around $100 now if you wait for a sale, so that's $0.03/GB. Of course, the prices go up from there, and enterprise level drives are a whole lot more.
(One thing I don't understand is how external drives are now cheaper than internal drives. The
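The $/GB comparisons in this subthread are simple division; a quick sketch, where the HDD price is the $100/3TB sale figure above and the SSD prices are back-solved from the quoted $/GB figures purely for illustration:

```python
# Price-per-GB comparison using the rough street prices quoted in this thread.
def price_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

print(f"3TB external HDD on sale:   ${price_per_gb(100, 3000):.3f}/GB")  # ~$0.03/GB
print(f"Cheapest 480GB SSD on sale: ${price_per_gb(240, 480):.2f}/GB")   # ~$0.50/GB
print(f"Typical low-end 480GB SSD:  ${price_per_gb(360, 480):.2f}/GB")   # ~$0.75/GB
```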
Re: (Score:2)
The $0.50/GB for flash is for the very cheapest SSDs available, and only when they're on a good sale. More likely is $0.75/GB on the low end, and it goes up from there.
The Samsung 840 EVO is regularly available at $0.55-$0.60/GB for the 1TB model. And while it uses TLC (with an SLC-based cache to improve endurance), it isn't generally considered a "low end" product - the 840 EVO got quite good reviews from various hardware sites.
Re: (Score:2)
Hmmm.. Most of the reviews I saw of the 840 EVO said they kind of blew and to buy the 840 PRO instead which was a much better product.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
bcache, dm-cache in our hot spare servers (Score:3)
> who re-writes an entire HD 2x per day?
We would. Actually at that rate you'd expect it to die within 3-4 years. Drives dying is bad, so you need to replace them BEFORE they are likely to wear out. So figure you can write no more than 1/2 of the capacity per day.
For our hot spare server offering, we use raid arrays of 14 3TB drives, yielding 36 TB.
We'd like to use bcache or dm-cache. With a 500 GB SSD, we could write 250 GB / day - less than 1% of the array's capacity.
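Roughly, the arithmetic behind that, as a sketch using the numbers above; the "write half the capacity per day" rule is the conservative rule of thumb mentioned here, not a vendor spec:

```python
# The "write no more than half the SSD's capacity per day" rule of thumb
# applied to a 500GB cache SSD fronting a ~36TB array, per the figures above.
array_capacity_gb = 36_000.0            # usable capacity of the 14 x 3TB array
ssd_capacity_gb = 500.0                 # bcache / dm-cache SSD size
safe_daily_writes_gb = ssd_capacity_gb / 2

fraction = safe_daily_writes_gb / array_capacity_gb
print(f"Safe daily writes: {safe_daily_writes_gb:.0f} GB "
      f"(~{fraction:.1%} of the array)")   # 250 GB, ~0.7%
```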
writes no problem for HOME use. Months for servers (Score:2)
For typical home use, the write limit will allow five years or more of use. For other use cases, it's a deal breaker. Our servers are an example.
We offer a value priced combination hot spare server and backup solution. We'd like to use SSDs with bcache or something similar. We don't because we'd hit the write limit in three to six months. Write limits need to be 20 times higher before SSDs will work in our application.
Re: (Score:3)
Maybe your application, but not for others. SSDs fit well into niche areas in the server market; the last server install I did with SSDs was 5 years ago, and they're still going strong. On the other hand, I've had entire SCSI arrays fail in a year and a half. At least hot-swapping makes it less of a pain all the way around.
don't forget to check those old SSDs (Score:2)
> Maybe your application, but not for others
Absolutely. As I mentioned, typical home use is one application where a quality SSD should be fine. I have a DNS server with several MBs of writes per day, so SSD would be fine for that (virtual) machine.
For the hot spares, no way SSD would work.
> the last server install I did with SSD's was 5 years ago, they're still going strong.
I hope you're checking those drives periodically and have some good monitoring. SSDs from five years ago had an expected lifespan of what, about five years in your application?
Re: (Score:2)
I hope you're checking those drives periodically and have some good monitoring. SSDs from five years ago had an expected lifespan of what, about five years in your application?
8 years give or take 6 months. Since they're in a raid array, redundancy does help.
Re: (Score:3)
High end flash add in cards (like FusionIO) typically specify the "write limit" (I think this is what you mean) in Petabytes Written (PBW). So a flash card might be guaranteed to give you a minimum of 5 petabytes written over the life of the card.
Re: (Score:2)
How many write limits does this have?
The larger the drive, the less an issue write limits become because the writes get spread over an increasingly large area.
Granted in certain specific niche server use cases it may still be a concern, but write limits is a rapidly disappearing problem for nearly all of us.
Re: (Score:3, Funny)
English much?
Oh, bitter irony, I stab at thee!
Re: (Score:3)
Of course, spread across a 2TB drive, that means writing 2PB of data before the drive dies, so even at a fairly heavy home user's 10GB per day, that's about 550 years before they have problems.
Why do people still think SSD write limits are an issue?
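The arithmetic behind that figure, as a sketch assuming roughly 1000 P/E cycles per cell (a typical ballpark for consumer flash of this era rather than a published spec) and ideal wear leveling:

```python
# Idealized endurance: capacity x P/E cycles spread evenly by wear leveling,
# divided by daily writes. The 1000-cycle figure is an assumption, not a spec.
capacity_gb = 2_000.0
pe_cycles = 1_000
daily_writes_gb = 10.0

total_write_budget_gb = capacity_gb * pe_cycles        # ~2 PB
years = total_write_budget_gb / daily_writes_gb / 365
print(f"~{total_write_budget_gb / 1e6:.0f} PB total, "
      f"~{years:.0f} years at {daily_writes_gb:.0f} GB/day")   # ~548 years
```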
Re: (Score:2)
mostly because they ARE still low enough to be an issue in some applications
Re: (Score:2)
Database servers running 24/7? How many people need to worry about one of those?
Besides, if one SSD can replace 20 hard drives (where speed, not capacity is required), it might still be cheaper to use SSDs even if they have to be retired a bit earlier than HDDs were.
Re: (Score:2)
any internet-facing service that does huge amounts of logging or data taking; we have many such where I work
Re: (Score:2)
For that matter logging is purely sequential so a hard drive should work very well in the first place.
Re: (Score:2)
no, logs are compressed and rotated
Re: (Score:2)
But again, if you get some major performance gains and have to replace the drive in two years (as opposed to say four), why is this a big issue. One doesn't expect new tech to be perfect, just better.
Re: (Score:2)
but we don't have "a drive", we have arrays with hundreds of drives. replacement even with magnetic is often
Re: (Score:2)
I've been using my SSD (yes, an OCZ Agility 2) since March 2011 (almost 3 years now), at least half of that under XP (no TRIM support), and according to what's been written on the drive, it won't matter for another 4-5 years even if sometimes I'm pounding the hell out of it (it's 25nm if I'm not mistaken). Most conventional drives will give me bad sectors way before that.
Re: (Score:2)
Not really surprised at that, I've got a first generation SSD(OCZ Vertex), and going by everything including a rough estimate I've still got 6 years left on it. And you're right on conventional drives giving bad sectors, my 1TB and 2TB drives were 3 months old and started throwing bad sectors. The 1TB will likely have to be replaced when I get back home, since it was causing controller resets as well.
Re: (Score:2)
you are assuming without knowing.
Re: (Score:2)
They were more of an issue with small drives and swapping operating systems. Those problems were overcome pretty early on with, as you point out, intelligent drive controllers that don't write data in the same place. Since latency is almost non-existent on an SSD, there's suddenly almost no penalty for writes across the drive. With 2TB of space to work with, it will take a very, very long time. I imagine drives of this size would be used for data storage more than anything, so it depends on usage. The average ho
Re: (Score:2)
Is that how the math works?
If my 2T drive is 80% full (I'd have a smaller one otherwise), that leaves 400G. Does the wear levelling know to move some of the "static" 80% into these 400G when I approach 1000 writes, giving me a new 999 writes into a fresh 400G, or is my drive just going to croak after 400Gx1000 writes, while 80% of it is only written once?
No guessing about which would be the right thing to do; I'd like someone who knows for sure to tell me how smart the wear levelling really is about moving never-c
Re: (Score:2)
Most SSDs have sectors you can't see. Even an 80% full drive likely has another 8-10% of overprovisioned spare sectors that you can't touch, specifically set aside for remapping purposes. I thought the newer controllers would even do as you suggest and remap some of the static data to the heavily modified sectors in the background for better wear leveling as well.
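A crude way to see why the answer matters, as a toy calculation with hypothetical numbers; real controller behaviour varies by vendor and isn't generally documented:

```python
# Toy endurance comparison: static data never relocated vs. full wear leveling.
# All numbers are illustrative assumptions, not vendor specifications.
capacity_gb = 2_000.0
static_fraction = 0.80      # 80% of the drive holds rarely-rewritten data
overprovision = 0.08        # hidden spare area reserved by the controller
pe_cycles = 1_000

# Only the free space plus spare area absorbs writes if static data stays put.
naive_writable_gb = capacity_gb * (1 - static_fraction + overprovision)
# The whole drive (plus spare area) absorbs writes if static data is rotated.
leveled_writable_gb = capacity_gb * (1 + overprovision)

print(f"No static rotation: ~{naive_writable_gb * pe_cycles / 1e3:.0f} TB of writes")
print(f"Full wear leveling: ~{leveled_writable_gb * pe_cycles / 1e3:.0f} TB of writes")
```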
Re: (Score:2)
Why do people still think SSD write limits are an issue?
Because Slashdot is full of Luddites. Anything new is bashed. In "typical" use, the user will die of old age before the drive. Or the complainer will state "if you take a drive with known write limitations, and put it as a cache drive in a very very busy server, writing at maximum speed 24/7, you'll reach write limits before MTBF times." So, don't use it for that. If you have a cache that write-intensive, drop 128 GB of RAM on a battery-backed card for performance. But no, rather than selecting the appropr
Re: (Score:2)
Yes, but you might also want to include a 128GB SSD, so that on power drop the system would start to write the data from DRAM to the SSD before it shuts down completely.
Re: (Score:2)
Re:Write limits (Score:5, Informative)
Re: (Score:2)
I wouldn't say SSDs are less reliable per se; they just have a limited but measurable lifespan. Being measurable is very important when purchasing with reliability in mind, as drive death can be predicted.
When SSDs were in their infancy they were plagued by problems, but we are past those times. Nothing is set in concrete, however; I am on the fence on this subject as SSDs haven't been around long enough to make any solid judgements, but I'm leaning towards the fact that SSDs will outperform spinni
Re: (Score:3)
Not true, failure rates for SSDs are an order of magnitude lower than those for HDDs (around 0.5% for SSDs vs 5% for HDDs per year)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
http://hardware.slashdot.org/story/13/09/12/2228217/ssd-annual-failure-rates-around-15-hdds-about-5 [slashdot.org]
Oh wait, you wanted a citation that said the opposite. My bad.
Re: (Score:2)
I probably write at least 100GB per day on my hard drives if I factor in OS and application pagefile/cache.
You probably don't.
Re: (Score:2)
I do, it is quite easy to do. But I'm not an average user, nor do I pretend to be.
I also use SSDs, and know not to send the high throughput writes to it. I send them to my 12-drive raid array that has even better throughput than my Raid-0 SSD array. I do keep my pagefile/swapfile on my SSD because latency is much better on it. I also monitor the SSD write levels, and last I checked, I'm good for another 8-9 years according to SMART, and probably closer to 12-13.
Re: (Score:2)
I do. I have 64GB of ram in my machine, and still carry a page file for when I exceed that.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
The Samsung 840 Series SSD has a total of 1000 P/E Cycles.
The SMART Wear Leveling Count value has two values; the normalized value (out of 100) and the raw value (out of 1000).
So, if the raw value is 30 it means that the cells have been erased 30 times out of the total 1000 times that the SSD can endure.
The normalized value is calculated like so
FLOOR.PRECISE((1000 - X) / 10)
With X being the raw value.
So, it would
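A small sketch of that calculation, assuming the 1000-cycle rating and the FLOOR((1000 - raw) / 10) formula quoted above; the attribute layout may differ on other drives:

```python
import math

# Normalized SMART Wear Leveling Count for a drive rated at 1000 P/E cycles,
# using the FLOOR((1000 - raw) / 10) formula described in the comment above.
def normalized_wear_level(raw_erase_count: int, rated_cycles: int = 1000) -> int:
    return math.floor((rated_cycles - raw_erase_count) / 10)

print(normalized_wear_level(30))   # 97 -> ~3% of the rated cycles consumed
print(normalized_wear_level(500))  # 50 -> halfway through the rated endurance
```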
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Everything degrades. Even the paint on the walls of your home degrades. But it's not something you have to take into account.
So: for all practical purposes, the magnetic medium of a mechanical hard drive platter does not degrade at all.
I'll get me coat... (Score:3)
So: for all practical purposes, the magnetic medium of a mechanical hard drive platter does not degrade at all.
Basically, even though it degrades slightly, I can pretend that it doesn't?
This would mean that, ohhh yes, I'm degrade pretender (ooh-ooh).
Also, does it matter how many Platters the drive has?
Re: (Score:2)
Re: (Score:2)
If your children will die of old age before the write limit is hit, why do you make it such an issue? So lonely you come to Slashdot to pick fights so people will talk with you?
Whoooooosh! [youtube.com]
Re: (Score:2)
In active use, my experience has been that platters usually degrade because the heads get parked on a park ramp one too many times, snap off the head arm, and then get dragged across the surface of the disk.... Want your disks to last for decades? Park the heads infrequently or not at all.
But even in the absence of actual use, eventually, even on a hard disk, the bits are likely to get corrupted by random stray cosmic rays and possibly superparamagnetism. Of course, any real data loss is likely to take decades, if not centuries.
Re: (Score:2)
Yes, other parts will degrade far faster than the magnetic media, so the magnetic media essentially does not degrade.
But even in the absence of actual use, eventually, even on a hard disk, the bits are likely to get corrupted by random stray cosmic rays and possibly superparamagnetism. Of course, any real data loss is likely to take decades, if not centuries.
The bits on a hard disk do get corrupted with time - I believe it is referred to as bit-rot. You would be lucky to have a hard drive last for 20 years even if unplugged. Just like those old floppies go bad - so do drives. But bit-rot can be prevented. ZFS includes an option where it will re-write data every so often. In addition, it automatically detects and corrects these errors - so l
Re: (Score:2)
Which is it? Do they or don't they degrade?
They do not degrade enough to matter during the normal lifetime of a drive, regardless of the write frequency or patterns. The bearings may wear out, but the iron oxide will not.
Re: (Score:2)
Everyone's desktops and laptops. Most people don't have attached storage, that ruins the point of a laptop. And gamers would laugh at the idea of putting their games on network storage. The only problem is price, which will come down as time goes on.
Re: (Score:2)
Why would you put an SSD in a NAS? The type of data put on a home NAS typically doesn't need very high transfer speeds.
Re: (Score:2)
Re: (Score:3)
I'm not quite sure exactly where these would be used, other than in niche systems that need large amounts of local, superfast storage.
Wireless access to a NAS? You've got it backwards: _you_ are the 'niche'. Every NAS I touch is connected via 10Gb Ethernet, and in some cases bonded 10Gb links that aggregate to 40-50Gb/s. We don't want these for playing Warcraft at home - we want them for work.
Example: I have NASes as storage targets for backup daemons that receive 40-50 simultaneous backup streams from clients. Each stream can average 120-150 MBytes/sec on its own; usually the network link is the bottleneck. Even if we pack several dozen 15k SAS dr
Re: (Score:2)
Why would you schedule 40-50 backups to occur at the same time?
Re: (Score:2)
Only way to get it done within the backup window - 40-50 clients at a time, out of ~750. The idea is to get as close to 100% capacity for network/IO as possible, without creating a backlog. Ideally, with an unlimited budget we could just double or triple our backup destinations, but that's not an option right now.
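The sizing logic, roughly, as a sketch using the stream counts and per-stream rates quoted above; the backup-window length is a placeholder, not something stated in the thread:

```python
# Back-of-the-envelope aggregate throughput for the backup setup described above.
streams = 45                 # concurrent backup streams (40-50 quoted)
per_stream_mb_s = 135.0      # average per-stream rate (120-150 MB/s quoted)
window_hours = 8             # hypothetical nightly backup window

aggregate_mb_s = streams * per_stream_mb_s
data_per_window_tb = aggregate_mb_s * 3600 * window_hours / 1e6
print(f"Aggregate: ~{aggregate_mb_s / 1000:.1f} GB/s, "
      f"~{data_per_window_tb:.0f} TB moved per {window_hours}h window")
```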
Re: (Score:2)
Agreed. I'm still waiting for 500GB+ drives to drop below $0.50/GB, without a mail-in rebate. I only use laptops, so adding additional drives isn't an option.
Re: (Score:2)
Plenty of laptops out there that support 2 drives.
Re: (Score:2)
Getting close. Once they hit $0.49, or lower, then I'll bite.
Re: (Score:2)
Re: (Score:2)
I swear I heard Doc Brown's voice when I read that...
Re: (Score:2)
2800MBps works out to roughly 22.4 Gb/s. Sure it's fast, but it's not absurdly fast: only about four or five times the maximum you get out of high-end consumer drives now, and those are bottlenecked by SATA 6Gb/s.
Comment removed (Score:5, Informative)
Re: (Score:3)
SSDs aren't for mass storage. You're better off with hard drives or tape for that.
SSDs are for blindingly fast performance first, everything else second. Install your OS and applications on a SSD. Keep your movie and music collection on a hard drive.
Re: (Score:2)
Re: (Score:2)
SSDs are also excellent for mobile devices because they don't suffer catastrophic failures if they are moved while operating.
Re: (Score:2)
Even when the costs do come down, SSDs are likely to be priced at 2x HDD prices, just because they can.
The market prices superior products at higher prices, just because it can.
You might not care, and that's fine, but for those of us who do, we went all-SSD a while ago and love it.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Give me an SSD within the same power-of-ten size as a hard drive for the same cost and we'll talk.
Seriously. Give me a 1Tb SSD for the cost of the cheapest XTb hard drive and I'll buy it. But if hard drives get to 10Tb in that time, guess what happens? You then have to give me a 10Tb drive for the same price.
Keep dreaming, buddy-boy. We won't "give you" anything like that for a very long time. The main point in moving to SSDs is R/W performance. Just put an SSD (any size) as your system drive and feel the mindblowing speed difference. In a modern computer, the mechanical hard disk drive is a huge bottleneck: processes spend a lot of time spinning thumbs in "I/O wait" state.
Re: (Score:3)
My rule for SSD hasn't changed since their invention.
My rule for storage is I don't trust anyone who can't get the suffixes for bits and bytes straight.
Re: (Score:2)
My rule for SSD: I will never again have a non-SSD system disk, due to the significant increase in speed, the noise reduction, and (on laptops) the increased battery life you get. The multi-TB storage is on a NAS.
Damn skippy!
Re: (Score:2)
> You're probably not the target customer.
Well DUH. Declaring that you are not MADE OF MONEY is a very legitimate sort of thing to say in this kind of discussion. Also it doesn't just apply to "mere individuals". Many if not MOST corporations probably feel the same way.
SSD solutions that are far too expensive to be relevant for most individuals or even corporations are nothing new.
Re: (Score:2)
SSD solutions that are far too expensive to be relevant for most individuals or even corporations are nothing new.
You can get a small ~32-64GB mSATA or M.2 SSD (which many motherboards now have direct-attach slots for) for about $60. If you use that as your boot / OS / critical-app drive and get a slow multi-TB spinning HDD for your bulk load-and-save storage, you'll get a huge improvement in your startup/shutdown times and general system operation while still having cheap mass storage. Is that far too expensive?
Re: (Score:2)
Re: (Score:2)
> running a large table through a scan will reliably blow your cache. for most use cases i/o performance remains limiting

Depends on how big your database is. We cache our entire database in RAM because it's only a few gigabytes; reads are as fast as they can be, and, in our case, far more common than writes.