
Intel Offers More Insight On Its 3D Memory (itworld.com)

itwbennett writes: When Intel and Micron Technology first announced the 3D XPoint memory in July, they promised about 1,000 times the performance of NAND flash, 1,000 times the endurance of NAND flash, and about 10 times the density of DRAM. At OpenWorld last week, Intel CEO Brian Krzanich disclosed a little more information on the new memory, which Intel will sell under the Optane brand, and did a demo on a pair of matching servers running two Oracle benchmarks. One server had Intel's P3700 NAND PCI Express SSD, which is no slouch of a drive. It can perform up to 250,000 IOPS per second. The other was a prototype Optane SSD. The Optane SSD outperformed the P3700 by 4.4 times in IOPS with 6.4 times less latency.
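
As a rough back-of-envelope on those demo numbers (a minimal sketch in Python; the P3700 latency used below is an illustrative assumption, since the article gives only the multipliers):

    # Figures implied by the article's multipliers; the P3700 latency is an
    # assumed round number for illustration, not a published spec.
    p3700_iops = 250_000                  # stated peak IOPS of the P3700
    p3700_latency_us = 20.0               # assumed latency in microseconds

    optane_iops = p3700_iops * 4.4              # "4.4 times" the IOPS -> ~1.1M
    optane_latency_us = p3700_latency_us / 6.4  # "6.4 times less" latency

    print(f"Implied Optane IOPS:    {optane_iops:,.0f}")
    print(f"Implied Optane latency: {optane_latency_us:.1f} us")
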
  • So which is it? (Score:5, Insightful)

    by fnj ( 64210 ) on Wednesday November 04, 2015 @02:04PM (#50865245)

    1,000 times the performance, or 6 times the performance? Somebody needs to get the story right with the hyperbole.

    • Re:So which is it? (Score:5, Insightful)

      by pushing-robot ( 1037830 ) on Wednesday November 04, 2015 @02:11PM (#50865313)

      They claimed the technology had the potential to hit 1000 times faster than current flash memory... they didn't specify when or what flash they were comparing to.

      In any case, this is an early prototype spanking the top of the line current technology. That's impressive in my book.

      • by dafradu ( 868234 )
        And no industry will deliver a product 1000x faster/better overnight, at least not as long as competitors can't do the same. They will sell you something 2 or 3 times faster every few years, maximizing profit.
      • Re:So which is it? (Score:5, Interesting)

        by nojayuk ( 567177 ) on Wednesday November 04, 2015 @02:44PM (#50865559)

        There's a whole raft of other things to consider before this tech changes the IT world -- how much does it cost, how many separate fabs can produce it so there's no single-point-of-failure that could constrain supply, how much redesign of existing chipsets is required to integrate it into current server/workstation/mobile phone designs, what's the failure rate in service, power dissipation and cooling requirements etc.

        That said, the demo suggests it can be implemented on existing platforms with little difficulty. Of course, as Napoleon once said, "There are lies, damned lies and rigged demos." Time will tell.

        • Re:So which is it? (Score:5, Informative)

          by swb ( 14022 ) on Wednesday November 04, 2015 @03:31PM (#50865937)

          If this technology can be adapted to fit into SAS-compatible packaging at MLC/3D NAND pricing this will rock the enterprise storage world for sure.

          Entire brands/products in enterprise storage are built around features like caching/tiering that charge you $30k for a little flash and way more than they should for spinning rust under the promise that they'll deliver flash performance for all your workloads, most of the time.

          Doing so requires beefy controllers to run elaborate tiering schemes, and along with the sky-high prices for media makes them extremely expensive and extremely profitable.

          If (and this is a big if) you can get SLC durability at MLC pricing and simultaneously cut the controller cost (need less compute because you're not bothering with tiering, far less software complexity), suddenly you could have someone selling entry level 24 drive shelves with millions of IOPS and sustained transfers that will melt SAS-12 cables.

          Basically, at pretty much any scale it will make sense to quit using spinning rust entirely, without paying nosebleed pricing.
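
A rough sanity check on that "melt SAS-12 cables" point (a sketch only; the per-drive throughput and wide-port lane count are illustrative assumptions):

    # How quickly a 24-drive shelf of fast drives outruns SAS-12 links.
    drive_gbs = 2.0                   # assumed GB/s sustained per drive
    drives = 24

    shelf_gbs = drive_gbs * drives    # ~48 GB/s aggregate from one shelf

    # SAS-12 signals at 12 Gbit/s per lane, roughly 1.2 GB/s usable after
    # encoding overhead; a typical wide port bundles 4 lanes (assumption).
    wide_port_gbs = 1.2 * 4           # ~4.8 GB/s per wide port

    print(f"Shelf aggregate: {shelf_gbs:.0f} GB/s")
    print(f"Wide ports needed for bandwidth alone: {shelf_gbs / wide_port_gbs:.0f}")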

          • Re:So which is it? (Score:4, Insightful)

            by nojayuk ( 567177 ) on Wednesday November 04, 2015 @03:50PM (#50866111)

            If this tech makes it into the marketplace at reasonable prices it's not going to be hanging off SAS-12 cables or any other serial links at that rate; it will be more tightly integrated with the CPU bus to deliver on the R/W and access speed improvements. Even PCIe is a possible bottleneck if this 3D memory can deliver what Intel are claiming for it. Comparing its performance to DRAM is a "tell" and shows what they're thinking; this may be the fabled "non-volatile RAM" solution that's been the Holy Grail researchers have been trying to develop pretty much ever since RAM was invented. (Yes, I know there are battery-backed-up RAM solutions that claim to be non-volatile but they're only non-volatile until the battery power runs out).

            • by swb ( 14022 )

              If it's cheap enough, it'd still be useful as a hard disk replacement even if that's not the optimal way to deploy it. Fixed storage isn't going away tomorrow even if this turns out to be the holy grail of NVRAM.

              I'm not sure it is, either, as its durability is compared to SSDs, not to DRAM.

              Even if it was a game changer, it'd be years before hardware and upstream architectures adapted to more optimal uses for it. And if it doesn't have DRAM durability, it's more likely to be used as permanent storage an

            • This is a quibble, but non-volatile RAM has only been the Holy Grail since about 1970. Prior to that, magnetic core memory was the standard RAM technology and is non-volatile. (To quibble the quibble, for a short period of time Williams tubes were the state-of-the-art (indeed, only) RAM, and they are volatile. Alan Turing played with Williams tubes.)

            • by r0kk3rz ( 825106 )

              Comparing its performance to DRAM is a "tell" and shows what they're thinking; this may be the fabled "non-volatile RAM" solution that's been the Holy Grail researchers have been trying to develop pretty much ever since RAM was invented. (Yes, I know there are battery-backed-up RAM solutions that claim to be non-volatile but they're only non-volatile until the battery power runs out).

              From TFA

              The company will also come out with Optane DIMMs later this year for early testers, which will combine the performance of DRAM with the capacity and cost of flash. That means a two-socket server with Optane DIMMs will have a total of 6 TB of addressable memory, "virtually eliminating paging between memory and storage, taking performance truly to a whole new level."

              Seems like we're going to find out soon; 6TB of addressable non-volatile RAM sounds like a game changer.

              • by nojayuk ( 567177 )

                Seems like we're going to find out soon; 6TB of addressable non-volatile RAM sounds like a game changer.

                A server system really needs to be able to address hundreds or thousands of terabytes of storage, not just six. That's what I meant by the server system designers having to revamp the basic concepts of a computer with RAM separate from secondary storage (HDDs or SSDs on a separate bus) to one with a "flat" storage architecture. The OS will have to change too to take account of the blurring or total elimi

                • It really depends on what type of server we're talking about. Is it a front-end web server? Is it a middleware application server? Is it a database server for small to medium databases? Is it a big DB cluster? Is it a media or document storage system? Is it a hypervisor on a hardware node offering shards of its resources to VMs? These have different storage and processing needs.

                  In the short term, there are a few solutions for the OS and applications. Many applications will keep as much in memory as possible

                • A server system really needs to be able to address hundreds or thousands of terabytes of storage, not just six.

                  Only in very niche markets is that true. Most servers don't need anything near 6TB of storage, let alone 6TB of (D)RAM.

                  • by nojayuk ( 567177 )

                    Most servers don't need anything near 6TB of storage, let alone 6TB of (D)RAM.

                    Many servers do need access to that sort of storage (and a hundred times more), and it would help if those servers ran the same OS on similar hardware as other, less-demanding servers do. The alternative is the sort of species differentiation that hobbles High Performance Computing (HPC), where there are few standards and a lot of hand-written system code flying in close formation, different on each machine.

                    I expect, if an

                    • Most servers that need to access that amount of storage don't do so locally; they do so through things like SANs, because maintaining that number of high-speed, low-capacity drives isn't trivial and is best consolidated across servers. Getting a 6TB pool would require at least ten 600GB 15K drives (the largest-capacity 15K RPM drive that Seagate currently makes), and that's not going into your 1x server blade, and typically isn't a great solution.

                      Network and remote access drives like SANs aren't going awa

                    • by swb ( 14022 )

                      I agree wholeheartedly that the SAN storage consolidation model isn't going away. It's logical and it's been so widely adopted with so many dollars and man-hours invested in it that it might never go away, regardless of storage device changes.

                      That being said, the "hyperconverged" software defined model of server nodes possessing some storage and clustering it into virtual SANs is gaining some traction. VMware has vSAN and Windows 2016 server will extend storage spaces to allow for this.

                      The challenge for t

          • For that to succeed you need more than 2 manufacturers (Intel & Micron) though.
            • by PRMan ( 959735 )
              Don't Samsung and HP already have competing technologies?
              • I don't think so. They will in a few years' time, and in the meantime they'll market tuned versions of their high-end SSDs as being comparable.
                But I'm someone who believes they will deliver on some of their hype.
      • They claimed the technology had the potential to hit 1000 times faster than current flash memory... they didn't specify when or what flash they were comparing to.

        Just to be up-front, this is a topic I'm very ignorant about; it was only casual curiosity that made me peek into the comment section. But doesn't Intel normally announce a new design with where they intend for it to land long before they announce the products that are only blips along their roadmap? For example: "With this our new Pentanium Matrix we'll reach 64 cores and 10 gigahertz."... and a few months later: "Now announcing our Titticaca processor with 16 cores at 3.09GHz"

        Eh I dunno, but now you hav

    • It kind of reminds me of the old "jet stream oven" infomercials.

      "It is microwave fast ... 2x faster than cooking it in a conventional oven"

    • I'm guessing both. (Score:5, Insightful)

      by DumbSwede ( 521261 ) <slashdotbin@hotmail.com> on Wednesday November 04, 2015 @02:29PM (#50865433) Homepage Journal

      Increasing memory speeds 1000x will not lead to a straight 1000x increase in operations. There are undoubtedly other bottlenecks in processing. What, for instance, is the theoretical max throughput of the memory interface used (is it a modified SSD interface)? What CPU overhead is involved? Don't expect your computer to perform 1000x better across the board just because one component is 1000x faster.
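
That intuition is essentially Amdahl's law. A minimal sketch, assuming purely for illustration that 20% of a workload's time is spent waiting on storage:

    def amdahl_speedup(fraction_accelerated: float, component_speedup: float) -> float:
        """Overall speedup when only part of the runtime benefits from a faster component."""
        return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / component_speedup)

    # A 1000x faster storage layer only helps the storage-bound 20% of the time.
    print(amdahl_speedup(0.20, 1000))   # ~1.25x overall
    print(amdahl_speedup(0.20, 6.4))    # ~1.20x -- barely different in practice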

      • Re: (Score:2, Funny)

        by Anonymous Coward

        is it a modified SSD interface?

        No. It'll show up using a modified DDR4 interface or an NVMe interface. You'd have to look at tech news sites (not Slashdot) to find that info.

    • How many times the cost, and the price?
    • by Kjella ( 173770 )

      Well you can deliver a car with 1000x the performance in horsepower, but it won't go 1000x faster. Since they've been intentionally vague about exactly what metric they based that claim on, it could be anything really. Beating a top of the line enterprise NVMe drive several times over is impressive at any rate. I look forward to seeing actual product.

    • by hey! ( 33014 )

      I note that they refer to a 1000x improvement in "performance", and a 6.4x improvement in "latency". Latency is one time-related performance metric; throughput might be the other one being alluded to.

      Imagine two water hoses. One is ten feet long and one inch wide; the other is several inches in diameter and a hundred feet long. Which can deliver water "faster"? Well, when you turn the spigot on, water comes out of the ten-foot hose first; but if you're filling up a swimming pool, the hundred-foot hose is faster.

      • Neither, cause I only have a 5/8" spigot, so neither will hook up to it and I'll drag out my old hose from the garage.

    • by fnj ( 64210 )

      Darn good discussion, all.

  • Yeah, we *really* believe Intel's marketing statements; I mean, they've been 100% accurate in the past.

    Look, just shut up and start shipping product. The IT community will come up with their own performance figures.
    • by pr0nbot ( 313417 )

      Yeah, we *really* believe Intel's marketing statements; I mean, they've been 100% accurate in the past.

      Yeah - at the time they even had the nerve to claim they'd be 600% accurate.

  • by sexconker ( 1179573 ) on Wednesday November 04, 2015 @02:31PM (#50865451)

    "6.4 times less latency" means that if the latency of the baseline thing you are comparing against is X, then the latency of the new thing has a latency of 6.4 times X less than X, which is X minus 6.4 times X, which is negative 5.4 X.

    The latency we're discussing is a measurement of time (and up until Intel's amazing breakthrough it was always positive).
    This means that Intel has discovered tachyons, invented a time machine, and violated causality in general. Either that, or "journalists" and marketers don't know what they're doing.
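
For what it's worth, the two readings side by side (a trivial Python sketch; the baseline value is arbitrary):

    x = 100.0                       # baseline latency, arbitrary units

    literal_reading = x - 6.4 * x   # "6.4 times X less than X" -> -540.0, nonsense
    intended_reading = x / 6.4      # what was presumably meant -> ~15.6

    print(literal_reading, intended_reading)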

  • I'd expect my circa-2017 laptop purchase to have 1 TB of XPoint memory, dynamically used as RAM and 'SSD' (32GB and 'nearly' 1TB), to deliver a 5 to 10x increase in general performance, and to cost relatively the same... Reasonably stupid idea?

    • by Anonymous Coward

      Only the "cost relatively the same" bit.

    • I figure it should be less than $5000 for the 3D-drive. (1,000GB/16GB * $75 == $4688). Although it is Intel, it might require your first born.
    • I mentioned this in a story a few days ago, but this brings it back to the forefront. The fastest SSDs have sequential write speeds about an order of magnitude slower than typical DDR3/DDR4 SDRAM. Increasing SSD speeds to be on par with DDR means you may actually need far less RAM than you did in the past because swap operations have very little cost. If endurance ticks up three orders of magnitude (as claimed), you might start considering dropping DRAM entirely for low end computers, perhaps with an incre

    • No, but you can expect your 2017/2018 laptop to have a general performance increase of maybe 10-15%, but loading things and searching for files will be 150%-500% faster depending on application, and suspend/resume may be silly fast.

  • How does performance compare to the fastest RAM drive? And what would be the estimated cost per IOPS of each? (RAM vs SSD vs Optane)

  • by TomGreenhaw ( 929233 ) on Wednesday November 04, 2015 @04:40PM (#50866509)
    Using XPoint as a successor to mass storage is, to my mind, short-term thinking. Maybe it's a quick way to sell the technology in the near term, but certainly not the best use case.

    We should get away from mass storage altogether and use this as replacement for RAM. It will take a rethinking of operating system structure, but promises to provide instant on computers with all programs and data always loaded and ready for immediate access. Database systems would immediately be orders of magnitude faster because all data is always ready for access.

    I for one will not miss virtual memory...
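
A rough sketch of what that "storage as memory" style of programming might look like: a hypothetical Python example that memory-maps a file and treats it as a byte-addressable, persistent region. The path and record layout are made up for illustration, and real persistent-memory code would need explicit flush/ordering guarantees:

    import mmap
    import struct

    # Hypothetical backing file; on real hardware this might be a region on a
    # DAX-mounted persistent-memory device rather than an ordinary disk file.
    PATH = "/tmp/pmem_demo.bin"
    SIZE = 4096

    # Create and size the backing file.
    with open(PATH, "wb") as f:
        f.truncate(SIZE)

    # Map it into the address space and use it like ordinary memory.
    with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), SIZE) as region:
        struct.pack_into("<Q", region, 0, 42)   # store a value "in place"
        region.flush()                          # ask the OS to persist it
        (value,) = struct.unpack_from("<Q", region, 0)
        print(value)                            # 42
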
    • We should get away from mass storage altogether and use this as replacement for RAM.

      It doesn't have enough write endurance to do that. You could burn the stuff out with a FOR loop.

      • Damn, good point.

        I guess we're back to VM until endurance can be addressed.
        • Yes, but when the virtual memory is on Optane it's going to have a shitload less impact. Also, your idea could still have legs: just allocate the static areas of your program to Optane and the dynamic ones to regular RAM.
      • I got to thinking about this problem. The L1, L2, and L3 caches on the CPU would largely mitigate it. The trend has been to increase the size of the on-die CPU cache. Perhaps a vastly increased L3 cache in the gigabytes, plus a wear-levelling non-volatile RAM controller, would fully address this problem.
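
A toy sketch of the wear-levelling idea in that comment: repeated writes to one logical address get spread across many physical cells (purely illustrative; real controllers use far more elaborate schemes):

    # Toy wear-leveller: each write to a logical address is redirected to the
    # least-worn physical cell, so no single cell absorbs all the writes.
    class WearLeveler:
        def __init__(self, physical_cells: int):
            self.wear = [0] * physical_cells   # writes absorbed per physical cell
            self.mapping = {}                  # logical address -> physical cell

        def write(self, logical_addr: int) -> int:
            target = min(range(len(self.wear)), key=self.wear.__getitem__)
            self.mapping[logical_addr] = target
            self.wear[target] += 1
            return target

    wl = WearLeveler(physical_cells=8)
    for _ in range(80):                # the dreaded FOR loop hammering one address
        wl.write(logical_addr=0)
    print(wl.wear)                     # [10, 10, 10, 10, 10, 10, 10, 10]
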
  • 250,000 IOPS per second, right up there with your LCD display, PDF format, and PIN number.

  • Yes, it is only six times faster; it has probably saturated the PCIe interface. Intel has already said as much: this would be an issue, and a new interface will be needed to accommodate the RAM's capabilities.

  • Am I the only one who thinks this technology is shockingly under-hyped? It eliminates a 50-year old performance anchor, neutralizes the biggest challenge in Computer Science, and makes a supercomputer out of an SoC.

    It came out of nowhere, but I believe Intel's claims. They wouldn't restart memory manufacturing in their own facilities if the tech wasn't ready for prime time.

    • No, I don't think so.
      At one write per second per memory cell, a device with 1000 times the endurance of NAND would last ... about a hundred days. RAM cells get updated on the order of milliseconds rather than seconds.
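
A quick back-of-envelope version of that estimate (a sketch; the per-cell NAND endurance figure is an assumed value for illustration):

    # Lifetime if the cells were rewritten like RAM, per the estimate above.
    nand_endurance = 10_000                      # assumed NAND program/erase cycles
    xpoint_endurance = nand_endurance * 1000     # "1,000 times the endurance"

    seconds_at_1_write_per_sec = xpoint_endurance / 1     # one write/sec per cell
    print(seconds_at_1_write_per_sec / 86_400)            # ~116 days

    seconds_at_1khz_updates = xpoint_endurance / 1000     # RAM-like update rate
    print(seconds_at_1khz_updates / 3600)                 # ~2.8 hours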

      This tech won't reduce the need for RAM. It's a better NAND, not a replacement for RAM. It may allow reduction in the quantity of RAM required where performance is not critical. It's very welcome, but no miracle cure for our computing ills.

      It may also enable a renaissance in Harvard-architect

      • I see that endurance spec now. DRAM appears to be safe until they can address that.

        I was imagining this would create a chip w/ 1 TB of L3. I guess that's a breakthrough or two away yet.

      • This tech won't reduce the need for RAM. It's a better NAND, not a replacement for RAM.

        If it performs anywhere near hype levels then it is indeed a replacement for the RAM that would otherwise be used to cache flash or disk. For a lot of use cases, that means most of the RAM in the box.

    • by twokay ( 979515 )
      Luckily it speaks for itself. No need for the engineers to even talk to the marketing department; they just need to demo it.
