Hardware

3D DRAM Spec Published

Lucas123 writes "The three largest memory makers have announced the final specification for three-dimensional DRAM, which is aimed at increasing performance for the networking and high-performance computing markets. Micron, Samsung and Hynix are leading the technology development efforts backed by the Hybrid Memory Cube Consortium (HMC). The Hybrid Memory Cube will stack multiple volatile memory dies on top of a DRAM controller. The result is a DRAM chip with an aggregate bandwidth of 160GB/s, 15 times the throughput of standard DRAM, while also reducing power by 70%. 'Basically, the beauty of it is that it gets rid of all the issues that were keeping DDR3 and DDR4 from going as fast as they could,' said Jim Handy, director of research firm Objective Analysis. The first versions of the Hybrid Memory Cube, due out in the second half of 2013, will deliver 2GB and 4GB of memory."
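
As a rough sanity check on the quoted 160GB/s aggregate figure, here is a back-of-envelope sketch. The link configuration used below (four full-width links, 16 lanes per direction, 10Gbit/s per lane) is an assumption for illustration; the article doesn't specify it.

```python
# Back-of-envelope check of the quoted 160 GB/s aggregate bandwidth.
# ASSUMED configuration (not from the article): 4 full-width links,
# 16 lanes per link per direction, 10 Gbit/s signalling per lane.

LINKS = 4
LANES_PER_LINK = 16       # per direction
GBIT_PER_LANE = 10        # lane rate, Gbit/s

per_direction_gbit = LINKS * LANES_PER_LINK * GBIT_PER_LANE   # 640 Gbit/s
per_direction_gbyte = per_direction_gbit / 8                  # 80 GB/s
aggregate_gbyte = 2 * per_direction_gbyte                     # TX + RX

print(f"{aggregate_gbyte:.0f} GB/s aggregate")   # -> 160 GB/s
```
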
  • the CPU vendors need to start stacking them onto their die.

    In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

    Stacked vias could also be used for other peripheral devices as well. (GPU?)

    • by Anonymous Coward

      If you want more RAM you just add more CPUs!!!

      WIN-WIN!!!

      • You think modern bloatware is inefficient and slow? Just wait until every machine is a NUMA machine!

    • by ArcadeMan ( 2766669 ) on Tuesday April 02, 2013 @03:13PM (#43341551)

      Mac users won't see any difference in 5 years... wink wink

      Posted from my Mac mini.

      • by kaws ( 2589929 )
        Hmm, tell that to my upgraded MacBook. I have 16GB of RAM in mine. On the other hand, you're probably right that it will take a long time for the upgrades to show up in Apple's store.
        • Like the top-of-the-range Mac Pros, currently with their 2-year-old Xeon CPUs.

          • Yesterday's server chip: today's desktop chip.

            Prime example: the AMD Athlon II 630. A couple of years ago it was the dog's bollocks in server processors and you couldn't get one for less than a grand. Now it's the dog's bollocks of quad-core desktop processors (nothing has changed except the name and the packaging), and my son bought one a month ago for change out of £100.

            The Core series processors you find in desktops and laptops these days all started life as identically-specced Xeon server processors.

            • Not really. The slightly cheaper workstations from Dell and others use current Xeons.
              The Core i series processors never were server processors. They don't support ECC and have smaller caches than Xeons.

      • by Issarlk ( 1429361 ) on Wednesday April 03, 2013 @01:27AM (#43345347)
        Since those are 3D chips, does that mean Apple's price for that RAM will be multiplied by 8 instead of 2?
    • Most CPU vendors do. This has been the standard way of shipping RAM for mobile devices for a long time (search package-on-package). It means that you don't need any motherboard traces for the RAM interface (which reduces cost), increases the number of possible physical connections (increasing bandwidth) and reduces the physical size. The downside is that it also means that the CPU (and GPU and DSP and whatever else is on the SoC) and the RAM have to share heat dissipation. If you put a DDR chip on top
      • by harrkev ( 623093 ) <{moc.liamg} {ta} {noslerrah.nivek}> on Tuesday April 02, 2013 @03:58PM (#43342035) Homepage

        HMC does not need to sit on top of a CPU. HMC is just a way to package a lot of memory into a smaller space and use fewer pins to talk to it. In fact, because of the smaller number of traces, you are likely to be able to put the HMC closer to the CPU than is currently possible. Also, since you are wiggling fewer wires, the I/O power will go down. Currently, one RAM channel can have two DIMMs in it, so the drivers have to be beefy enough to handle that possibility. Since HMC is based on SerDes, it is a point-to-point link that can be lower power.

        I am sure that as speed ramps up, HMC will have its own heat problems, but sharing heat with the CPU is not one of them.
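
A quick sketch of the pin-count arithmetic behind that "dozen pins" style of claim; the pin totals below are ballpark assumptions for illustration, not spec values.

```python
# Rough bandwidth-per-pin comparison: parallel DDR3 channel vs. a
# SerDes-style HMC link. Pin counts are ballpark ASSUMPTIONS.

# DDR3-1600, 64-bit channel: data, strobes, address, command, control;
# call it ~125 signal pins for 12.8 GB/s peak.
ddr3_pins, ddr3_gb_s = 125, 12.8

# One 16-lane full-duplex SerDes link at 10 Gbit/s per lane:
# 16 lanes x 2 directions x 2 wires (differential) = 64 pins,
# 20 GB/s each way = 40 GB/s aggregate.
hmc_pins, hmc_gb_s = 64, 40.0

print(f"DDR3: {ddr3_gb_s / ddr3_pins:.2f} GB/s per pin")   # ~0.10
print(f"HMC : {hmc_gb_s / hmc_pins:.2f} GB/s per pin")     # ~0.6, ~6x better
```
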

        • You might want to read the context of the discussion before you reply. My post was in reply to someone who said:

          And for faster performance the CPU vendors need to start stacking them onto their die.

      • Don't forget power. At the frequencies memory runs at, it takes considerable power to drive an inter-chip trace. The big design constraints on portable devices are size and power.
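
The first-order physics here is the usual dynamic-switching model, P = a·C·V²·f. The capacitance, voltage and toggle-rate figures below are illustrative assumptions, not measured values.

```python
# Crude switching-power model for one memory data line: P = a * C * V^2 * f.
# Capacitances, voltage and toggle rate are illustrative ASSUMPTIONS.

def line_power_mw(cap_pf: float, volts: float, freq_mhz: float,
                  activity: float = 0.5) -> float:
    """Dynamic power of one line, in milliwatts."""
    return activity * (cap_pf * 1e-12) * volts**2 * (freq_mhz * 1e6) * 1e3

V, F = 1.5, 800.0                      # DDR3-class voltage and clock
board = line_power_mw(8.0, V, F)       # trace out to a DIMM, ~8 pF
onpkg = line_power_mw(1.0, V, F)       # short on-package hop, ~1 pF

print(f"board trace: {board:.1f} mW/line, on-package: {onpkg:.1f} mW/line")
print(f"x64 data lines: {64 * board:.0f} mW vs {64 * onpkg:.0f} mW")
```
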

      • This is more a practice w/ portable and wireless devices, where real estate, and not just power, is at a premium. The top package is typically larger than the bottom package, and all the signal pins are at the periphery. For a memory-on-CPU PoP, the CPU is typically the bottom package, and its signals are all on the core pins, while the memory is the top package, w/ signals at the periphery. Internally, the CPU and memory could be connected, and only the separate s
    • by ackthpt ( 218170 )

      the CPU vendors need to start stacking them onto their die.

      In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

      Stacked vias could also be used for other peripheral devices as well. (GPU?)

      IBM tried this with the PS/2 line. It fell flat on its face.

      • by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Tuesday April 02, 2013 @05:08PM (#43342699) Homepage

        To be fair, if somebody had tried to sell something as locked down as the iPad during the period when IBM first released the PS/2, it would also have flopped. The market has changed a lot since the 1980s. People who seriously upgrade their desktops are a rather small fraction of the total market for programmable things with CPUs.

      • by dj245 ( 732906 )

        the CPU vendors need to start stacking them onto their die.

        In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

        Stacked vias could also be used for other peripheral devices as well. (GPU?)

        IBM tried this with the PS/2 line. It fell flat on its face.

        This is news to me. I owned a PS/2 model 25 and model 80, and played around with a model 30. The model 80 used 72 pin SIMMs and even had a MCA expansion card for adding more SIMMs. The model 80 I bought (when their useful life was long over) was stuffed full of SIMMs. The model 25 used a strange (30 pin?) smaller SIMM, but it was upgradable. I forget what the model 30 had. Wikipedia seems to disagree with you [wikipedia.org] also.

        • I think the grandparent's point is that IBM tried to be all slick and new and proprietary with the PS/2 line and only suckers -- big corp, gov., banks -- bought into it.

          I inherited all kinds of PS/2s... excrement. At the time they were being sold with a _12_ inch "billiard ball" monochrome IBM monitor. I eventually upgraded all of them to Zenith totally flat color monitors.

          PS/2s were wildly proprietary -- whee, we get to buy all new add-in cards! And performance dogs -- Model 30/286 FTW.

          A newb read

        • From what I recall, IBM's problem with the PS/2 brand was:

          1) They tried to shift everyone to MCA instead of the more open ISA/EISA, mostly because they were trying to stuff the genie back in the bottle and retake control of the industry.

          2) The lower end of the PS/2 line was garbage, which tarnished the upper-end.

          We had a few PS/2 server towers to play with. They were rather over-engineered and expensive, and the Intel / Compaq / AT&T commodity systems were both faster and cheaper.
    • by gagol ( 583737 )
      I would like to see 4GB of on-die memory with a regular DRAM controller for "swap" ;-)
    • the CPU vendors need to start stacking them onto their die.

      In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

      Stacked vias could also be used for other peripheral devices as well. (GPU?)

      Problem with this, of course, is that Intel wants to stop having slotted motherboards. Chips will be affixed to boards. Makes RAM upgrades a costly proposition, no?

  • by sinij ( 911942 ) on Tuesday April 02, 2013 @02:59PM (#43341385)
    Just like Star Trek movies, every other iteration of memory tech is a dud. I will just wait for holographic crystals.
  • Still waiting... (Score:4, Interesting)

    by Shinare ( 2807735 ) on Tuesday April 02, 2013 @03:04PM (#43341429)
    Where's my memristors?
    • by fyngyrz ( 762201 ) on Tuesday April 02, 2013 @04:03PM (#43342095) Homepage Journal

      Your memristors are with my ultracaps, flying car, and retroviral DNA fixes. I think they're all in the basement of the fusion reactor. Tended to by household service robots.

      • Ultracaps are readily available now. I've got a bank of 2600 farad jobbies. I use them to power my Mad Science setup.

  • by Anonymous Coward

    Magnetic core memory was 3D. With something like 16k per cubic foot.

  • by ArcadeMan ( 2766669 ) on Tuesday April 02, 2013 @03:11PM (#43341531)

    Where have I seen 3D silicon [imageshack.us] before?

  • Latency? (Score:5, Insightful)

    by gman003 ( 1693318 ) on Tuesday April 02, 2013 @03:20PM (#43341631)

    Massive throughput is all well and good, very useful for many cases, but does this help with latency?

    As near as I can tell, DRAM latency has maybe halved since the Y2K era. Processors keep throwing more cache at the problem, but that only helps to a certain extent. Some chips even go to extreme lengths to avoid too much idle time while waiting on RAM ("HyperThreading", the UltraSPARC T* series). Getting better latency would probably help performance more than getting more bandwidth would.
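
The standard way to see why is the average-memory-access-time identity, AMAT = hit time + miss rate × miss penalty. The cache figures below are assumptions for illustration only.

```python
# AMAT = hit_time + miss_rate * miss_penalty. Illustrative ASSUMPTIONS:
# ~1.3 ns cache hit, 2% of accesses missing all the way to DRAM.

def amat_ns(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_ns + miss_rate * miss_penalty_ns

base   = amat_ns(1.3, 0.02, 70.0)   # ~70 ns DRAM round trip
halved = amat_ns(1.3, 0.02, 35.0)   # same workload, DRAM latency halved

print(f"AMAT: {base:.2f} ns -> {halved:.2f} ns if DRAM latency halves")
```
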

    • Re:Latency? (Score:5, Informative)

      by harrkev ( 623093 ) <{moc.liamg} {ta} {noslerrah.nivek}> on Tuesday April 02, 2013 @03:50PM (#43341961) Homepage

      I have a passing familiarity with this technology. Everything communicates through a serial link. This means that you have the extra overhead of having to serialize the requests and transmit them over the channel. Then, the HMC memory has to de-serialize it before it can act on the request. Once the HMC has the data, it has to go back through the serializer and de-serializer again. I would be surprised if the latency was lower.

      On the other hand, the interface between the controller and the RAM itself is tightly controlled by the vendor, since the controller is TOUCHING the RAM chips instead of sitting a couple of inches away like it does now, so you should be able to tighten the timings up. All communication between the RAM and the CPU will be through serial links, which means that the CPU needs far fewer pins for the same bandwidth. A dozen pins or so will do what 100 pins used to do before. This means that you can have either smaller/cheaper CPU packages, or more bandwidth for the same number of pins, or some trade-off in between.

      I, for one, welcome our new HMC overlords, and hope they do well.
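
A toy model of the round trip described above; every figure is an assumption, chosen only to show where serialization overhead lands relative to the DRAM core access time.

```python
# Toy read-latency model for a serialized memory link. All figures
# are ASSUMPTIONS, chosen only to show the shape of the trade-off.

SERDES_HOP_NS = 3.0    # serialize or deserialize once, at one end
FLIGHT_NS     = 0.5    # short point-to-point trace
DRAM_CORE_NS  = 28.0   # row activate + column read inside the stack

# Request: serialize -> fly -> deserialize; the response repeats the
# same three steps in the other direction after the DRAM core access.
one_way = SERDES_HOP_NS + FLIGHT_NS + SERDES_HOP_NS
read_ns = 2 * one_way + DRAM_CORE_NS

print(f"modelled read latency: {read_ns:.1f} ns "
      f"({2 * one_way:.1f} ns of it from the link)")
```
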

      • Do you know why the target bandwidth for USR (15Gb/s) is lower than the bandwidth for SR (28Gb/s)?

        It seems strange that they would not take advantage of the shorter distance to increase the transfer speed.

    • This technology will not significantly affect memory latency, because DRAM latency is almost entirely driven by the row and column address selection inside the DRAM. The additional controller chip will likely increase average latency. However, this effect will be lessened because the higher-bandwidth memory controllers will fill the processor's caches more quickly. Also, the new DRAM chips will likely be fabricated on a denser manufacturing process, with many parallel columns, which will result in a minor
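
The array-limited term described above can be read straight off a module's timing numbers. The DDR3-1600 CL11 values below are typical retail figures, assumed here for illustration.

```python
# Core access time from DRAM timings: open a row (tRCD), then read a
# column (CL). DDR3-1600 CL11-11-11 values are ASSUMED typical figures.

clock_mhz = 800.0                 # DDR3-1600 I/O clock
period_ns = 1000.0 / clock_mhz    # 1.25 ns per cycle

tRCD, tCL = 11, 11                # cycles
core_ns = (tRCD + tCL) * period_ns

# ~27.5 ns before the first data bit, set by the DRAM array itself --
# stacking and faster links leave this term essentially untouched.
print(f"row open + column read: {core_ns:.1f} ns")
```
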

  • Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...

    • by ackthpt ( 218170 )

      Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...

      Yep. Hope they got signatories all notarized and everything.

  • frakking excellent news. For some time now the bottleneck has been memory bandwidth, not CPU/GPU processing power. This will help a lot with problems like raytracing/pathtracing, which are memory-bound.

    thank you, gods of Olympus!

    p.s. for some time now I've been trying to find again a .pdf file (which I had found in the past, but lost somehow) with detailed explanations and calculations on the memory and flops requirements of raytracing, and how memory bandwidth is very low for such pro
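
The memory-bound claim above is the classic roofline argument: attainable throughput = min(peak compute, bandwidth × arithmetic intensity). The intensity and hardware numbers below are assumptions for illustration.

```python
# Roofline sketch: attainable = min(peak_flops, bandwidth * intensity).
# Intensity and hardware figures are illustrative ASSUMPTIONS.

def attainable_gflops(peak_gflops: float, bw_gb_s: float,
                      flops_per_byte: float) -> float:
    return min(peak_gflops, bw_gb_s * flops_per_byte)

PEAK      = 3000.0   # GPU peak, GFLOP/s
INTENSITY = 2.0      # incoherent ray traversal: few flops per byte fetched

for bw in (192.0, 320.0):   # e.g. a GDDR5 card vs. a stacked-DRAM part
    got = attainable_gflops(PEAK, bw, INTENSITY)
    print(f"{bw:5.0f} GB/s -> {got:6.0f} GFLOP/s")   # bandwidth-bound in both cases
```
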
  • by Anonymous Coward

    It will probably be around 5 years until we can buy these things the way we buy DDR3 today. This industry develops so fast, yet moves so slowly.

  • by Anonymous Coward

    Hybrid Memory Cube exists in a 4-point world. Four corners are absolute and storage capacity is circumnavigated around Four compass directions North, South, East, and West. DRAM consortium spreads mistruths about Hybrid Memory Cube four point space. This cannot be refuted with conventional two dimensional DRAM.

  • by hamster_nz ( 656572 ) on Tuesday April 02, 2013 @04:37PM (#43342443)

    If you think that modern memory is as simple as "send an address, then read or write the data", you are much mistaken.

    Have a read of What every programmer should know about memory [lwn.net] for a simplified overview of what is going on -- and even that is only a simplification of what really happens.

    To actually build a memory controller is another step up again: RAM chips have configuration registers that need to be set, and modules have a serial flash on them that holds device parameters. With high-speed DDR memory you even have to make allowances for the different lengths of the PCB traces, and that is just the starting point -- the devices still need to perform real-time calibration to accurately capture the returning bits.

    Roll on, Serial Port Memory Technology [wikipedia.org]!
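
A small taste of the bring-up work described above: converting the module's nanosecond timing parameters into controller clock cycles. The timing values are typical DDR3 datasheet numbers, assumed here for illustration.

```python
# One small piece of memory-controller init: turn the nanosecond
# timings from the module's parameter flash into clock cycles.
# The DDR3 timing values below are typical ASSUMED datasheet numbers.

import math

def to_cycles(t_ns: float, clk_mhz: float) -> int:
    """Round a timing constraint up to whole controller clocks."""
    return math.ceil(t_ns * clk_mhz / 1000.0)

CLK_MHZ = 800.0   # DDR3-1600
for name, t_ns in (("tRCD", 13.75), ("tRP", 13.75), ("tRAS", 35.0)):
    print(f"{name}: {t_ns} ns -> {to_cycles(t_ns, CLK_MHZ)} cycles")
```
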

  • I mean, money? Psssh, there are people out there who have two GTX Titans ($1,000 cards) and would have more if there was room on the motherboard. Plus the vast reduction in power usage would be really useful for mobile high-end stuff. Would love to grab an Nvidia 850 or whatever next year with 4 gigs of this onboard.
  • by Anonymous Coward

    How do they cool this apparatus?
