

Intel SSD Roadmap Points To 2TB Drives Arriving In 2014

MojoKid writes "A leaked Intel roadmap for solid state storage technology suggests the company is pushing ahead with its plans to introduce new high-end drives based on cutting-edge NAND flash. It's significant for Intel to be adopting 20nm NAND in its highest-end data center products, because of the challenges smaller NAND nodes present in terms of data retention and reliability. Intel introduced 20nm NAND lower in the product stack over a year ago, but apparently has waited until now to bring 20nm to the highest end. Reportedly, next year, Intel will debut three new drive families — the SSD Pro 2500 Series (codenamed Temple Star), the DC P3500 Series (Pleasantdale) and the DC P3700 Series (Fultondale). The Temple Star family uses the M.2 and M.25 form factors, which are meant to replace the older mSATA form factor for ultrabooks and tablets. The M.2 standard allows more space on PCBs for actual NAND storage and can interface with PCIe, SATA, and USB 3.0-attached storage in the same design. The new high-end enterprise drives, meanwhile, will hit 2TB (up from 800GB), ship in 2.5" and add-in card form factors, and offer vastly improved performance. The current DC S3700 series offers 500MBps writes and 460MBps reads; the DC P3700 will increase this to 2800MBps reads and 1700MBps writes. The primary difference between the DC P3500 and DC P3700 families appears to be that the P3700 family will use Intel's High Endurance Technology (HET) MLC, while the DC P3500 family sticks with traditional MLC."
This discussion has been archived. No new comments can be posted.

  • by gweihir ( 88907 ) on Saturday December 07, 2013 @12:38PM (#45626977)

    I tried to find out for a large customer how long the current enterprise SSDs live, but Intel declined to comment. Through the grapevine I have heard of people doing complete replacements every 6 months to prevent failures in production environments, after they learned the hard way that these drives are not as reliable or long-lived as many people think. Small-write endurance in particular seems to be pretty bad.
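
    The endurance concern above can be made concrete with a back-of-the-envelope lifetime estimate. All the figures in this sketch (the DWPD rating, warranty period, and daily write volume) are illustrative assumptions, not Intel datasheet values:

    ```python
    # Rough SSD lifetime estimate from a rated endurance figure.
    # DWPD = full drive writes per day, sustained over the warranty period.
    # Every number here is a hypothetical example, not a published spec.

    def drive_lifetime_days(capacity_gb: float, dwpd: float,
                            daily_writes_gb: float) -> float:
        """Days until the rated endurance budget is exhausted."""
        warranty_days = 5 * 365  # assume a 5-year endurance rating
        total_endurance_gb = capacity_gb * dwpd * warranty_days
        return total_endurance_gb / daily_writes_gb

    # Example: an 800 GB drive rated for 10 DWPD, absorbing 20 TB of
    # writes per day (write amplification from small writes included).
    print(round(drive_lifetime_days(800, 10, 20_000)))  # → 730
    ```

    Note that small random writes inflate the effective write volume through write amplification, which is why a drive can burn through its endurance budget far faster than the raw application write rate suggests.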

  • by Anonymous Coward on Saturday December 07, 2013 @01:31PM (#45627349)

    I've used them at a Global Top 100 website for several years with significantly fewer failures than any of the SAS drives they replaced.

    We installed roughly 500 Intel SSDs across several different workloads: databases, webservers, etc. In the last two years, we had 1-2 failures. For the record, when SSDs fail, they usually just go read-only. When spinning rust fails, you usually lose all your data. Statistically speaking, a 24-drive SAS array is going to have more frequent failures than a 4-drive SSD array, and the SSD array is going to smoke it on performance.

    The game has changed and a lot of people need to catch up. Most SAN technology is obsolete. RAID cards are obsolete (not fast enough). RAID 5 is now obsolete (rebuild times take too long with modern drives). The only reason to use hard disks is for cheap data archival purposes.

    If you're not using SSDs in your database or high-IO workloads in 2013 you're wasting your time. They're no less reliable than any other type of storage and that argument has been debunked a thousand times over.
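
    The drive-count argument above is simple expected-value arithmetic. A minimal sketch, using assumed annualized failure rates (AFRs) chosen purely for illustration, not measured from any fleet:

    ```python
    # Expected failures per year scale linearly with drive count.
    # The AFR values below are hypothetical placeholders.

    def expected_annual_failures(drive_count: int, afr: float) -> float:
        """Expected number of drive failures per year for an array."""
        return drive_count * afr

    sas_array = expected_annual_failures(24, 0.04)   # 24 SAS drives at 4% AFR
    ssd_array = expected_annual_failures(4, 0.005)   # 4 SSDs at 0.5% AFR

    print(f"SAS: {sas_array:.2f} failures/yr, SSD: {ssd_array:.2f} failures/yr")
    ```

    Even if the per-drive failure rates were equal, the six-fold difference in drive count alone would make the SAS array fail more often.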

  • Re:Are you kidding? (Score:2, Interesting)

    by 0123456 ( 636235 ) on Saturday December 07, 2013 @06:34PM (#45629193)

    Huh? The typical failure mode for an SSD that hits its write limit is actually to simply become read only.

    In my experience, the typical failure mode for an SSD is not to reach its write limit, but to destroy all your data through a firmware bug. The typical failure mode for an HDD is an increasing number of bad blocks, which allows you to recover most of your data before it dies.
