Intel Hardware

Intel Launches SSD DC P3608 NVMe Solid State Drive With 5GB/Sec Performance

MojoKid writes: Intel launched a new NVMe-based solid state drive today, dubbed the SSD DC P3608. As the DC in the product name suggests, this drive is designed for the data center and enterprise markets, where large capacities, maximum uptime, and top-end performance are paramount. The Intel SSD DC P3608 is somewhat different from the recent consumer-targeted NVMe PCI Express SSD 750 series, however: it essentially packs a pair of NVMe-based SSDs onto a single card, built for high endurance and high performance. There are currently three drives slated for the Intel SSD DC P3608 series: a 1.6TB model, a 3.2TB model, and a monstrous 4TB model. All of the drives feature dual Intel NVMe controllers paired with Intel 20nm MLC HET (High Endurance Technology) NAND flash memory. The 1.6TB drive's specifications list max 4K random read IOPS in the 850K range, with sequential reads and writes of 5GB/s and 3GB/s respectively. In the benchmarks, the new SSD DC P3608 delivers just that level of performance and is one of the fastest SSDs on the market to date.
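
A quick back-of-the-envelope check of how the quoted figures relate to one another (a minimal sketch in Python; the IOPS and bandwidth numbers are the ones claimed in the summary above, and the 4 KiB block size is the usual reading of "4K IOPS"):

    # Sanity check: does 850K 4K IOPS square with 5 GB/s sequential reads?
    read_iops = 850_000          # claimed 4K random read IOPS (1.6TB model)
    block = 4 * 1024             # 4 KiB per random I/O
    random_read_gbps = read_iops * block / 1e9
    print(f"4K random read ~= {random_read_gbps:.1f} GB/s")   # ~3.5 GB/s
    print("claimed sequential: 5 GB/s read, 3 GB/s write")

At roughly 3.5 GB/s, the random-read figure sits comfortably under the 5 GB/s sequential ceiling, so the two claims are at least mutually consistent.
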
This discussion has been archived. No new comments can be posted.

  • monstrous 4TB

    That's what they said about 4K, 4MB, and 4GB. Now 4PB in a single drive at 4GB pricing would be monstrous.

    • by afidel ( 530433 )

      Yeah, it's not even that large by current standards; Samsung announced a 16TB 2.5" SSD last month, though at much lower performance numbers than these Intel units. It will come down to cost and workload profile as to which is a better fit. For NVMe you really want the highest IOPS per slot that you can get, because if all you need is capacity, SAS12 gives you the ability to attach MANY more drives per controller than NVMe, which is limited to a handful of drives.
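
      A rough way to see that trade-off in numbers (a minimal sketch; the per-drive IOPS, capacities, and drive counts below are illustrative assumptions, not vendor specs):

          # Hypothetical per-controller comparison: IOPS per slot vs. raw capacity.
          configs = {
              "NVMe":  {"drives": 4,   "iops": 850_000, "tb": 4},    # a handful of cards
              "SAS12": {"drives": 100, "iops": 90_000,  "tb": 16},   # drives behind expanders
          }
          for name, c in configs.items():
              print(f"{name}: {c['iops'] / 1e3:.0f}K IOPS per slot, "
                    f"{c['drives'] * c['tb']} TB raw per controller")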

      • by Anonymous Coward

        SSD size really isn't any special feat. (Chips are tiny little fuckers and you can cram lots of them into something the volume of a 2.5" drive.)

        You just stack more chips together and things more or less scale in a linear manner with little-to-no diminishing returns. Fuck, if you do it properly your speed goes /up/ because you can address multiple dies at a time. Your only limit is the speed at which you can funnel data in and out of the thing.

        Hard drives, on the other hand, have very real mechanical limits.

      • by mlts ( 1038732 )

        I remember one company that did a dog/pony show for an all-SSD SAN product that did this, although I forgot the name of the company.

        Their SAN had fast Intel SSD for the landing zone where the data had one pass at being deduplicated. Then, there was a background task that deduplicated the data a lot better (but took more CPU power) and moved the data to slower, but higher capacity solid-state drives. Both the faster Intel SSD and the slower (but bigger in capacity) Samsungs would definitely have a place in

    • Hard drive capacity grows by roughly 100x every ten years, so starting from today's 8 TB drives we'd be at 4 PB in about 13-14 years.
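
      Checking that growth-rate arithmetic (a minimal sketch; 100x per decade compounds to roughly 1.58x per year):

          import math
          # 8 TB today, growing 100x every ten years; how long until 4 PB (4000 TB)?
          per_year = 100 ** (1 / 10)                      # ~1.585x per year
          years = math.log(4000 / 8) / math.log(per_year)
          print(f"{years:.1f} years")                     # ~13.5 years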

    • by rbrander ( 73222 )

      Ahem. The GP was actually being, and I'm using the word correctly, literal.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      Tera is derived from the Greek word teras, meaning “monster”... so TB is in fact monstrous, though PB and up are not.

  • Do they feature NSA-enriched firmware?
  • by swb ( 14022 ) on Wednesday September 23, 2015 @05:09PM (#50586011)

    That was supposed to be consumer cheap and datacenter fast and durable?

    I don't know what market this thing is for, maybe the host caching or db stuff.

    • by Anonymous Coward

      3D XPoint is at least a year away. There are still plenty of people who need SSD now for whatever project they are working on.

      Question is, can they afford to buy cheap expendable SSD drives now to boost performance and then hopefully hold out until they can spend a chunk of their budget money on risky 1st-gen 3D XPoint tech?

      • by swb ( 14022 )

        Question is, can they afford to buy cheap expendable SSD drives now to boost performance

        After reading about the SSD write endurance test:

        http://techreport.com/review/2... [techreport.com]

        I'd be curious just to see how long they would last in real-world (neither extremely brutal nor extremely mild) SAN/RAID applications, like a shelf of 24 in a RAID-6 config with a couple of hot spares.
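
        For a shelf like that, the usable capacity and a crude wear estimate work out roughly as follows (a minimal sketch; the 1 TB drive size, 150 TBW rating, and 2 TB/day write load are illustrative assumptions, and RAID-6 write amplification is ignored):

            drives, hot_spares, parity_drives = 24, 2, 2        # RAID-6 carries 2 parity drives
            drive_tb, rated_tbw = 1.0, 150                      # hypothetical consumer-class SSD
            usable_tb = (drives - hot_spares - parity_drives) * drive_tb
            writes_per_day_tb = 2                               # hypothetical array-wide write load
            per_drive_per_day = writes_per_day_tb / (drives - hot_spares)
            years_to_rated_tbw = rated_tbw / per_drive_per_day / 365
            print(f"{usable_tb:.0f} TB usable, ~{years_to_rated_tbw:.0f} years to rated endurance")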

        What would the actual failure rate be? Would the relatively low cost of say a Samsung 850 Pro be worth a higher failure rate when you cons

    • Yes they did. They announced it will be ready sometime towards the end of next year.

      Just because you have something new around the corner doesn't mean everyone else isn't already working on something for right now.

  • ... on high-capacity SSDs being over what you'd pay for an equivalent amount of storage on a hard drive is the single biggest issue with flash storage, in general.

    Until that issue is settled, SSDs can really only replace the floppy, IMO... but not the hard drive.

    • ... on high-capacity SSDs being over what you'd pay for an equivalent amount of storage on a hard drive is the single biggest issue with flash storage, in general.

      There's a bit of an exponential curve where high-capacity SSDs get much more expensive. Smaller SSDs are dirt cheap.

      Until that issue is settled, SSDs can really only replace the floppy, IMO... but not the hard drive.

      That battle is over. The SSD has ALREADY replaced the hard drive. We haven't bought a new computer with a spinning platter in our office in probably 3 years. We're a small business without huge data needs and our one server, built circa 2011, currently has a 4tb ZFS pool, and about 1/2 of that is snapshots and workstation backups (for which speed doesn't particularly matter). Next time I buil

      • by mark-t ( 151149 )

        It's only over for those for whom economics is superfluous and who don't particularly care how they spend their money. That same $400 that bought you a 1TB SATA SSD could also have easily afforded more than 8TB of hard drive space.

        Since you asked "why not", of course.

        • Did you miss where I said our current server holds 4tb (we have about 1tb free)? Why would I even want 8tb?

          • by mark-t ( 151149 )
            It apparently escaped your attention that I was referring to how much storage you could have otherwise purchased for the same amount as you spent on your ssd. Hey, it's your money, and you can spend it however you want, but don't assume that everyone is so indifferent about how far their dollars will actually go. For a quarter of what you spent on the ssd, one could buy 2tb of platter space, which is an entirely typical desktop configuration in 2015.
            • Odd comment then, given that I commented on how much more expensive large SSDs are. Typical desktop-sized SSDs are dirt cheap. Like I said, we're a small business. We run typical business apps in addition to a few people doing graphical and multimedia production work. Out of about 25 desktops, I don't think a single user is using more than 250gb of local storage. We have absolutely no need for 2tb of space--it would just be totally wasted. A 200gb SSD makes for a _far_ better desktop experience and is well u

              • by mark-t ( 151149 )
                If 200gb is all you need, sure.... My mac, which my (grown) kids use quite a bit for movie editing for their film projects, has 2TB of storage and they use well over 50% of the hard drive.
            • It is not so long ago that you could share a few hundred GB of HDD space among a number of users (home directories, group directories); now it gets economical to share a similar amount or more on SSD.
              And no need for silly RAID controller cards anymore.

              If the workstations run Linux or BSD, why not forgo local storage: mount the root partitions from the network (they're mostly read-only, with a low level of activity anyway).
              Would be fun if they're on dedupe'd SSD storage, gigabit networking to the clients and 10GbE

    • What issue? That they cost more per GB? Most people are far happier with their IOPS/$ ratio, and you can keep a few local multi-TB spinning platters around if cloud storage is not your thing.
      • by mark-t ( 151149 )
        It's not simply that it costs more per gigabyte so much as that, at the scale at which storage is actually employed, the difference in price can end up spelling the difference between something that is economically viable and something that is not. You can buy a 2TB HD for about $75, which is cheap. You can also spend roughly 10 times that for the equivalent storage in flash, which isn't so cheap anymore.
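
        Putting those quoted prices side by side (a minimal sketch; the $75-for-2TB hard drive and "roughly 10 times that" flash figures are the ones from the comment above):

            hdd_price, hdd_gb = 75, 2000          # $75 for a 2TB hard drive
            ssd_price, ssd_gb = 750, 2000         # "roughly 10 times that" for flash
            print(f"HDD: {100 * hdd_price / hdd_gb:.1f} cents/GB")   # ~3.8 cents/GB
            print(f"SSD: {100 * ssd_price / ssd_gb:.1f} cents/GB")   # ~37.5 cents/GB
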
        • Yes, but you've made the classic error of starting a land war in...no, wait, the classic error of equating capacity with value, and initial cost with lifetime cost.

          If your application is operation-bound, you have to stripe or partition hundreds of spinning disks to get the same performance as an SSD. It's like owning a UPS van vs a fleet of motorcycles to deliver packages. The truck is by far the most efficient for carrying lots of packages, but if you have to deliver 1000 boxes an hour, you'd much rather

    • by Anonymous Coward

      >... on high-capacity SSDs being over what you'd pay for an equivalent amount of storage on a hard drive is the single biggest issue with flash storage, in general.
      How many times does some joker like you have to make this point in every SSD article?
      You're bloody determined to try to convince people that they are making poor economic choices by buying solid state drives, as if you know and understand their needs.
      Paying the same amount of money to get 10x the storage space with HDDs would do us absolutely no good.

      • by KGIII ( 973947 )

        The lowest amount of RAM I have, in any box that gets used on a regular basis, is 16 GB. The rest are around 32 and my regular desktop (which I'm sort of using at the moment) has 64. I'm not really using the desktop, I guess, but am connected via VNC and using the desktop remotely (I'm on hiatus/wanderlust for an undetermined duration and want to maintain my own encrypted connection and I like this route).

        I guess my point is that, yeah, as I save fewer and fewer things (I often don't even use the OS that is

      • by mark-t ( 151149 )
        It's not about getting 10x the storage space for the same money as much as it is spending 10x the money for the same amount of storage space (which can spell the difference between economically viable and not) when your storage needs are much more demanding than just a couple of hundred gigabytes.
  • by nuckfuts ( 690967 ) on Wednesday September 23, 2015 @06:40PM (#50586473)

    People often comment that only a datacentre or an intensive database operation needs this kind of speed, but virtualization is another application where IOPS are important.

    I recently put together a small ESXi server with a couple of Intel 750 Series [intel.com] PCIe SSDs for VM storage, rated at 460,000 random read, 290,000 random write (4k) IOPS. Even with multiple VMs running, the responsiveness is like nothing I've experienced before.
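
    For context, converting those rated 4K figures into raw bandwidth and splitting them across a hypothetical VM count (a minimal sketch; the 20-VM figure is an arbitrary assumption):

        read_iops, write_iops, block = 460_000, 290_000, 4 * 1024
        print(f"4K random read ~= {read_iops * block / 1e9:.1f} GB/s, "
              f"write ~= {write_iops * block / 1e9:.1f} GB/s")
        vms = 20                                # hypothetical number of VMs
        print(f"~{read_iops // vms:,} read IOPS available per VM")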

    • Re: (Score:2, Informative)

      by Anonymous Coward

      rated at 460,000 random read, 290,000 random write (4k) IOPS.

      No, they're not. They're rated at up to 460,000 random read, up to 290,000 random write 4k IOPS.
      That's a major difference between consumer and datacenter SSDs: consumer drive specs list burst performance, DC specs list sustained.
      And that difference is quite significant [anandtech.com].

    • by Wolfrider ( 856 )

      > I recently put together a small ESXi server with a couple of Intel 750 Series [intel.com] PCIe SSDs for VM storage

      --Could you please expand on your server specs? I am interested in putting together a small ESXi server for personal use and would like to keep the budget under $800 or so if possible... TIA

  • I always wondered why they never did this with HDDs and just made them 5 1/4in in size to use up all those empty bays in the front of my computer, even if you had to connect two SATA cables to it. Or, back in the day, two IDE cables.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...