Hardware

3D DRAM Spec Published

Lucas123 writes "The three largest memory makers announced the final specifications for three-dimensional DRAM, which is aimed at increasing performance for networking and high performance computing markets. Micron, Samsung and Hynix are leading the technology development efforts backed by the Hybrid Memory Cube Consortium (HMC). The Hybrid Memory Cube will stack multiple volatile memory dies on top of a DRAM controller. The result is a DRAM chip that has an aggregate bandwidth of 160GB/s, 15 times more throughput as standard DRAMs, while also reducing power by 70%. 'Basically, the beauty of it is that it gets rid of all the issues that were keeping DDR3 and DDR4 from going as fast as they could,' said Jim Handy, director of research firm Objective Analysis. The first versions of the Hybrid Memory Cube, due out in the second half of 2013, will deliver 2GB and 4GB of memory."
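For rough scale (an editor's back-of-envelope sketch, not from the article): if the baseline is assumed to be a single 64-bit DDR3-1333 channel, the 160GB/s aggregate figure works out to roughly the quoted 15x. The baseline choice is an assumption.

    # Back-of-envelope check of the "15 times" claim; the DDR3-1333 baseline is an assumption.
    hmc_bandwidth = 160.0              # aggregate HMC bandwidth from the summary, GB/s
    ddr3_channel = 1333e6 * 8 / 1e9    # one 64-bit DDR3-1333 channel: 1333 MT/s * 8 bytes/transfer
    print(f"DDR3-1333 channel: {ddr3_channel:.1f} GB/s")
    print(f"Speed-up vs. that baseline: {hmc_bandwidth / ddr3_channel:.1f}x")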
  • by Anonymous Coward on Tuesday April 02, 2013 @04:14PM (#43341565)

    For Christ's sake - it's even in the summaries now.

    "15 times more throughput AS standard DRAMs"

    It's "15 times more throughput THAN standard DRAMs", you illiterate cretins...

    What the hell happened to the American education system in the last ten years or so? It seems like half of you ignoramuses don't know what any of your prepositions mean. Just put in 'to', 'on', 'then', 'that', 'than', etc., etc. at random; that'll do. Near enough.

  • Re:Latency? (Score:5, Informative)

    by harrkev ( 623093 ) <kevin@harrelson.gmail@com> on Tuesday April 02, 2013 @04:50PM (#43341961) Homepage

    I have a passing familiarity with this technology. Everything communicates through a serial link. This means that you have the extra overhead of having to serialize the requests and transmit them over the channel. Then, the HMC memory has to de-serialize it before it can act on the request. Once the HMC has the data, it has to go back through the serializer and de-serializer again. I would be surprised if the latency were lower.
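    As a rough sketch of that overhead (every timing figure below is an illustrative assumption, not a number from the HMC spec), the request and the response each pay the serialize/deserialize cost once:

        # Illustrative SerDes round trip; all figures here are assumed, not from the spec.
        serialize_ns = 3.0     # packetize and serialize the request
        link_ns = 1.0          # time of flight over the short serial link
        deserialize_ns = 3.0   # recover the request on the cube side
        dram_access_ns = 30.0  # the actual DRAM array access inside the cube

        # The request path and the response path each cross the link once.
        read_latency_ns = 2 * (serialize_ns + link_ns + deserialize_ns) + dram_access_ns
        print(f"Read latency with SerDes overhead: ~{read_latency_ns:.0f} ns")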

    On the other hand, the interface between the controller and the RAM itself is tightly controlled by the vendor, since the controller is TOUCHING the RAM chips instead of sitting a couple of inches away like it does now, so you should be able to tighten the timings up. All communication between the RAM and the CPU will be through serial links, which means the CPU needs far fewer pins for the same bandwidth. A dozen pins or so will do what 100 pins used to do. This means that you can have either smaller/cheaper CPU packages, or more bandwidth for the same number of pins, or some trade-off in between.
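    To put rough numbers on the pin argument (the per-lane rate below is an assumption; the actual spec defines its own lane speeds), a handful of multi-gigabit serial lanes can carry what a wide parallel DDR bus does:

        # Pins vs. bandwidth, with assumed (not spec-quoted) figures.
        ddr3_pins = 100                      # ballpark pins for a 64-bit DDR3 channel plus address/control
        ddr3_bandwidth = 10.7                # GB/s for that channel (DDR3-1333)
        lane_rate_gbps = 10.0                # assumed serial lane rate, Gbit/s
        lane_bandwidth = lane_rate_gbps / 8  # GB/s per lane, ignoring encoding overhead
        lanes = ddr3_bandwidth / lane_bandwidth
        pins = lanes * 2                     # one differential pair per lane, one direction
        print(f"~{lanes:.0f} serial lanes ({pins:.0f} pins) match ~{ddr3_pins} parallel-bus pins")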

    I, for one, welcome our new HMC overlords, and hope they do well.

  • Re:Latency? (Score:4, Informative)

    by hamster_nz ( 656572 ) on Tuesday April 02, 2013 @06:08PM (#43342693)

    This change of packaging allows greater memory density and maybe higher transfer bandwidths. It will not alter the "first word" latency much, if at all.

    Signal propagation over the wires isn't the problem; the way all DRAM works is:

    - The DRAM arrays have "sense amplifiers", used to recover data from the memory cells. They are much like op-amps. To start the cycle, both inputs of each sense amplifier are charged to a middle level.
    - The row is opened, dumping any stored charge into one side of the sense amplifier.
    - The sense amplifiers then saturate the signal to recover either a high or a low level.
    - At this point the data is ready to be accessed and transferred to the host (for a read), or values updated (for a write). It is in this part that memory interconnect performance really matters (e.g. Fast Page Mode DRAM, DDR, DDR2, DDR3).
    - Once the read-back and updates are completed, the row is closed, capturing the saturated voltage levels back in the cells.

    And then the next memory cycle can begin again. On top of that you have to add in refresh cycles: the rows are opened and closed on a schedule to ensure that the stored charge doesn't leak away, consuming time and adding to uneven memory latency.
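    A minimal sketch of how those steps add up to "first word" latency (the timings below are ballpark DDR3-class assumptions, not HMC figures):

        # First-word latency from the row cycle described above.
        # All timings are assumed, ballpark DDR3-class values in nanoseconds.
        t_rcd = 13.75  # open the row and let the sense amplifiers saturate
        t_cas = 13.75  # column access: move the selected data out to the interface
        t_rp = 13.75   # precharge: close the row, recharge bit lines to the middle level

        row_hit_ns = t_cas                   # best case: the row is already open
        row_miss_ns = t_rp + t_rcd + t_cas   # worst case: a different row must be closed first

        print(f"Row hit : ~{row_hit_ns:.2f} ns")
        print(f"Row miss: ~{row_miss_ns:.2f} ns  (a faster interconnect barely changes this)")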
