3D DRAM Spec Published 114
Lucas123 writes "The three largest memory makers announced the final specifications for three-dimensional DRAM, which is aimed at increasing performance for networking and high performance computing markets. Micron, Samsung and Hynix are leading the technology development efforts backed by the Hybrid Memory Cube Consortium (HMC). The Hybrid Memory Cube will stack multiple volatile memory dies on top of a DRAM controller. The result is a DRAM chip that has an aggregate bandwidth of 160GB/s, 15 times the throughput of standard DRAMs, while also reducing power by 70%. 'Basically, the beauty of it is that it gets rid of all the issues that were keeping DDR3 and DDR4 from going as fast as they could,' said Jim Handy, director of research firm Objective Analysis. The first versions of the Hybrid Memory Cube, due out in the second half of 2013, will deliver 2GB and 4GB of memory."
And for faster performance (Score:1)
the CPU vendors need to start stacking them onto their die.
In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.
Stacked vias could also be used for other peripheral devices as well. (GPU?)
Re: (Score:1)
If you want more RAM you just add more CPUs!!!
WIN-WIN!!!
Re: (Score:2)
You think modern bloatware is inefficient and slow? Just wait until every machine is a NUMA machine!
Re: (Score:2)
Most machines aren't multi-CPU machines.
Re: (Score:2)
The standard terminology appears to be that multi-core is not multi-CPU, and that's abundantly clear from the context of the thread.
The claim, you may recall, is that every multi-CPU machine sold is a NUMA architecture. That's patently untrue of almost all machines which feature multiple cores on one die.
Re:And for faster performance (Score:5, Funny)
Mac users won't see any difference in 5 years... wink wink
Posted from my Mac mini.
Re: (Score:2)
Re: (Score:2)
Like the top of the range Mac Pros currently, with their 2-year-old Xeon CPUs.
Re: (Score:2)
Yesterday's server chip: today's desktop chip.
Prime example: the AMD Athlon II 630. Couple years ago it was the dog's bollocks in server processors and you couldn't get one for less than a grand. Now it's the dog's bollocks of quad core desktop processors (nothing has changed except the name and the packaging) and my son bought one a month ago for change out of £100.
The Core series processors you find in desktops and laptops these days all started life as identically-specced Xeon server processors.
Re: (Score:2)
Not really. The slightly cheaper workstations from Dell and others use current Xeons.
The Core i series processors never were server processors. They don't support ECC and have smaller caches than Xeons.
Re: (Score:2)
Intel begs to differ:
http://ark.intel.com/products/codename/29890/Clarkdale [intel.com]
Re: (Score:2)
All that link is telling me Clarkdale was a desktop CPU. I own one of those too, the i5-661.
It also tells me the Xeon Clarkdale has ECC and the i5 and i3 don't. The i-series also has integrated graphics; Xeon doesn't.
They branded one of them a Xeon for the workstation market, put ECC on it and took off the GPU.
Re:And for faster performance (Score:4, Funny)
Re: (Score:3)
Re:And for faster performance (Score:4, Interesting)
HMC does not need to sit on top of a CPU. HMC is just a way to package a lot of memory into a smaller space and use fewer pins to talk to it. In fact, because of the smaller number of traces, you are likely to be able to put the HMC closer to the CPU than is currently possible. Also, since you are wiggling fewer wires, the I/O power will go down. Currently, one RAM channel can have two DIMMs in it, so the drivers have to be beefy enough to handle that possibility. Since HMC is based on serdes, it is a point-to-point link that can be lower power.
I am sure that as speeds ramp up HMC will have its own heat problems, but sharing heat with the CPU is not one of them.
Re: (Score:2)
And for faster performance the CPU vendors need to start stacking them onto their die.
Re: (Score:2)
Don't forget power. At the frequencies memory runs at, it takes considerable power to drive an inter-chip trace. The big design constraints on portable devices are size and power.
Re: (Score:2)
Re: (Score:3)
the CPU vendors need to start stacking them onto their die.
In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.
Stacked vias could also be used for other peripheral devices as well. (GPU?)
IBM tried this with the PS/2 line. It fell flat on its face.
Re:And for faster performance (Score:4, Interesting)
To be fair, if somebody tried to sell something as locked down as the iPad is during the period when IBM first released the PS/2, it would have also flopped. The market has changed a lot since the 1980's. People who seriously upgrade their desktop are a rather small fraction of the total market for programmable things with CPU's.
Re: (Score:2)
the CPU vendors need to start stacking them onto their die.
In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.
Stacked vias could also be used for other peripheral devices as well. (GPU?)
IBM tried this with the PS/2 line. It fell flat on its face.
This is news to me. I owned a PS/2 model 25 and model 80, and played around with a model 30. The model 80 used 72 pin SIMMs and even had a MCA expansion card for adding more SIMMs. The model 80 I bought (when their useful life was long over) was stuffed full of SIMMs. The model 25 used a strange (30 pin?) smaller SIMM, but it was upgradable. I forget what the model 30 had. Wikipedia seems to disagree with you [wikipedia.org] also.
The PS/2 line stunk (Score:2)
I inherited all kinds of PS/2s...excrement. At this time they were being sold with a _12_ inch "billiard ball" monochrome IBM monitor. I eventually upgraded all of them to Zenith totally flat color monitors.
PS/2s were wildly proprietary -- wee, we get to buy all new add-in cards! And performance dogs -- Model 30/286 FTW.
A newb read
Re: (Score:2)
1) They tried to shift everyone to MCA instead of the more open ISA/EISA, mostly because they were trying to stuff the genie back in the bottle and retake control of the industry.
2) The lower end of the PS/2 line was garbage, which tarnished the upper-end.
We had a few PS/2 server towers to play with. They were rather over-engineered and expensive, and the Intel / Compaq / AT&T commodity systems were faster and less expensive.
Re: (Score:3)
Re: (Score:2)
the CPU vendors need to start stacking them onto their die.
In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.
Stacked vias could also be used for other peripheral devices as well. (GPU?)
Problem with this, of course, is that Intel wants to stop having slotted motherboards. Chips will be affixed to boards. Makes RAM upgrades a costly proposition, no?
Every other iteration of ram tech is a dud (Score:4, Funny)
Re:nothing new here (Score:5, Interesting)
I was working at SGI at the time, late 1991. The cheapest way to buy expansion memory was to buy Indigo's and throw out the rest of the computer. SGI was just feeling the first tickles of the commoditization of computer hardware, and was looking for ways to make their components unique (and keep them expensive.)
Re:nothing new here (Score:5, Insightful)
Re: (Score:2)
I worked for a company that needed more RDRAM in a server. We bought a second hand server, took out the RAM and threw away the rest. It was cheaper.
Re: (Score:2)
RDRAM was never cheap. I binned a Dell because it was cheaper to build a new machine with the required spec than to add a Gig of RDRAM to that thing.
Re: (Score:2)
Yeah, but how long till one of the partners runs off, patents this new process, and starts suing everyone in sight? (Remember Rambus?)
Still waiting... (Score:4, Interesting)
Re:Still waiting... (Score:5, Funny)
Your memristors are with my ultracaps, flying car, and retroviral DNA fixes. I think they're all in the basement of the fusion reactor. Tended to by household service robots.
Re: (Score:2)
Ultracaps are readily available now. I've got a bank of 2600 farad jobbies. I use to power my Mad Science setup.
Ultracaps (Score:5, Insightful)
Um... yeah. No. I appreciate that what you have are considerably better than regular caps, but they're nowhere *near* the performance of what we keep being offered. Nanotube infused designs [mit.edu] with power to weight ratios around that of batteries, graphene designs [ucla.edu], etc. There's a huge wealth of applications waiting for them to hit somewhere around those marks. Electric cars, actual car battery replacements, cellphone power supplies that never die, backup systems for the house with peak powers far in excess of anything we have now but with comparable storage... the ultracap "breakthroughs" are as regular as any other kind (memristors, etc.) and the consistent no-show of actual commercially available units is also consistent. It's the flying car of electronic components, sigh. High voltage, high capacity, high vapor factor, lol.
Believe me, I've been following the whole ultracap thing for a while. I even keep an eye on EEStor, which I can assure you has been a stupendous exercise in fruitless waiting. As a ham with a full boat of offline powered goodies and the beginnings of a household able to run off backup systems, and more than a little willingness to buy an electric car, actual availability of ultracaps in what I call "the battery range" would truly light me up.
But that carrot is well and truly still out on the stick.
Re: (Score:3)
TI has a new range of super-low-power embedded chips which use FRAM, they are using it to replace flash and get faster writes, lower power consumption and higher write cycles before failure, so there's one new tech which made it to market and might become more popular over the coming years as it gets cheaper.
And even current-gen ultra-capacitors have a similar or better *power*/weight ratio as a battery - I'd like to see a 30g battery which can give 30A at 600V without damage to itself. It's the *energy*/we
Re: (Score:2)
Yes, overall energy capacity, not peak power. My bad.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Screw memristors, where is my damned Racetrack Memory, IBM?!
Right here, just not quite as small. [uni-klu.ac.at]
Re: (Score:2)
Didn't Tron steal those race tracks?
Not really the first time for this (Score:1)
Magnetic core memory was 3D, with something like 16K per cubic foot.
Oh noes (Score:3)
Where I have seen 3D silicon [imageshack.us] before?
Re: (Score:2)
Where I have seen 3D silicon before?
On Pamela Anderson?
Re:Dram (Score:4, Insightful)
So when can people running ddr1 or ddr2 expect to get some multilayer chips that vastly increase memory bandwidth in older systems?
Given that, for PC applications at any rate, the memory controller is built into either the motherboard or the CPU, there is likely to be a bottleneck there in any case. There would have been no reason for designers of memory controllers of the era to spec them out with the expectation of more than modest improvements.
Also, this '3D memory' stuff includes a memory controller with the DRAM dice stacked on top. To what, exactly, in a DDR2-using system are you going to connect a fancy new memory controller?
If you were a real high roller with a big cluster full of multi-socket hypertransport based systems or something, somebody might be moved to build some very, very, high performance memory modules that occupy CPU sockets; but that's a serious edge case. Most systems(even new ones) simply don't have a spare bus fast enough to hang substantially-faster-than-DDR3 RAM from.
Re: (Score:2)
This HMC stuff is going to require new CPUs with new memory controllers on board. On the plus side, for the same bandwidth, they will use a lot fewer pins.
Of course, the down-side is the early-adopter penalty of HMC being rather expensive. I expect that if it takes off, the price will drop rapidly.
Re: (Score:2)
Most of those pins in the CPU are for power. While the overall system power consumption can be lowered, it's entirely moved to a single chip. They may need more pins. A 130W CPU with a core voltage of ~1V needs an average of 130A of current going through those pins. The peaks will be much, much higher. They'll need more pins to get more bandwidth in and out of the CPU+Memory chip too.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
The power of a modern processor to get work done is dominated by cache misses. I mean by a factor of a hundred or more to one, unless every bit you are computing lives in cache and nothing ever kicks your code or data's cache line out (including another line of code or data that you need: because of the way cache works, you can't map every address to every line in cache).
Don't take my word for it though, take Cliff Click's: http://www.infoq.com/presentations/click-crash-course-modern-hardware [infoq.com]
Re: (Score:2)
The power of a modern processor to get work done is dominated by cache misses. I mean by a factor of a hundred or more to one unless every bit you are computing lives in cache and nothing ever kicks your code or data's cache line out (including another line of code or data that you need.
I happen to know that. What I meant by this was that it shouldn't matter all that much that latency is much worse than the throughput, because the burst transfers effectively amortize the latency cost. You're doing random reads against the L1 cache, not against the main memory. (If you organize your data so as to make the cache miss with every read, you're screwed anyway.)
Re: (Score:3)
Back in 1997, it was determined that ~90% of the benchmarks and customer applications (provided to us for testing purposes, the NDAs were amazing) used on PowerPC were completely dominated by cache misses. That means that if we knew how many times the processor touched a bus (data easily obtained in real time), we could be accurate to within 5% of what the performance would be using a spreadsheet calculation (Thanks, Dr. Jenevein) vs running the apps on a cycle accurate system simulation which could take w
Re: (Score:2)
You say newer, I was teaching people to use dcbt/icbt in PowerPC (and similar instructions in other architectures) to do that in the 90's (granted, they affected the L2 if one existed, no one had implemented an L3 on-die at that point). I love the instructions, and used the heck out of them when I hand optimized assembly code- not a career choice I would recommend at this point in time, btw. Compilers exist that can make use of them, fortunately, and they do help maintain the performance curve, but they d
Re: (Score:2)
Rude of me to reply to myself, but I should have added that when the vector units were added to PowerPC in the mid-late '90s, dst (data stream) instructions had the ability to indicate whether the fetches were transient or not and affect only the L1 if they were. gcc has supported the ability to do this since not long after the MPC7400 was released, IIRC.
Re: (Score:2)
It matters a great deal, and making sure burst transfers are effective is not always possible.
I do high performance calculations for a living. Knowing in advance what you will need in the future is a somewhat hard problem (and the basis of most modern optimization.)
The difference between main memory and cache is vast, if you can predict what you need far enough in advance to load it into cache that helps quite a bit, but realize that normally at best you are loading 4x what you really will need (which is t
Re: (Score:2)
Absolutely nothing. Hence, no change in slashdot editing quality. New here, are you?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I'm an American, and like many others I too cringed when I read that. Are you implying that people in the Uber-glittery Eurozone never make grammatical errors?
It could simply mean that L1 and L2 speakers tend to make different classes of errors.
Re: (Score:1)
> ... about something as insignificant than that.
There. Broke that for you.
Re: (Score:2)
Latency? (Score:5, Insightful)
Massive throughput is all well and good, very useful for many cases, but does this help with latency?
Near as I can tell, DRAM latency has maybe halved since the Y2K era. Processors keep throwing more cache at the problem, but that only helps to a certain extent. Some chips even go to extreme lengths to avoid too much idle time while waiting on RAM ("HyperThreading", the UltraSPARC T* series). Getting better latency would probably help performance more than bandwidth.
Re: (Score:2)
Re: (Score:3)
Pointer chasing is the canonical example. Trees, linked lists of every flavor, maps, many many more.
Even if your memory accesses are aligned you will still start to stream cache misses as soon as you are operating beyond the limits of cache, or start bouncing between cores and/or threads (snooping is cheap, but it isn't free and by the time you get there another thread might have kicked out your data).
Then there is synchronization between threads. Fences aren't free (far far from it, though some can be che
Re:Latency? (Score:5, Informative)
I have a passing familiarity with this technology. Everything communicates through a serial link. This means that you have the extra overhead of having to serialize the requests and transmit them over the channel. Then, the HMC memory has to de-serialize it before it can act on the request. Once the HMC has the data, it has to go back through the serializer and de-serializer again. I would be surprised if the latency was lower.
On the other hand, the interface between the controller and the RAM itself is tightly controlled by the vendor since the controller is TOUCHING the RAM chips, instead of a couple of inches away like it is now, so that means that you should be able to tighten timings up. All communication between the RAM and the CPU will be through serial links, so that means that the CPU needs a lot fewer pins for the same bandwidth. A dozen pins or so will do what 100 pins used to do before. This means that you can have either smaller/cheaper CPU packages, or more bandwidth for the same number of pins, or some trade-off in between.
I, for one, welcome our new HMC overlords, and hope they do well.
Re: (Score:2)
Do you know why the target bandwidth for USR (15Gb/s) is lower than the bandwidth for SR (28Gb/s)?
It seems strange that they would not take advantage of the shorter distance to increase the transfer speed.
Re: (Score:3)
This technology will not significantly affect memory latency, because DRAM latency is almost entirely driven by the row and column address selection inside the DRAM. The additional controller chip will likely increase average latency. However, this effect will be lessened because the higher bandwidth memory controllers will fill the processor's cache more quickly. Also, the new DRAM chips will likely be fabricated on a denser manufacturing process, with many parallel columns, which will result in a minor
Re:Latency? (Score:4, Informative)
This change of packaging allows greater memory density, and maybe higher transfer bandwidths. It will not alter the "first word" latency much, if at all.
Signal propagation over the wires isn't the problem; the way all DRAM works is.
- The DRAM arrays have "sense amplifiers", used to recover data from the memory cells. They are much like op-amps. To start the cycle, both inputs on the sense amplifier are charged to a middle level.
- The row is opened, dumping any stored charge into one side of the sense amplifier.
- The sense amplifiers then saturate the signal to recover either a high or low level.
- At this point the data is ready to be accessed and transferred to the host (for a read), or values updated (for a write). This is the part where memory interconnect performance really matters (e.g. Fast Page Mode DRAM, DDR, DDR2, DDR3).
- Once the read-back and updates are completed, the row is closed, capturing the saturated voltage levels back in the cells.
And then the next memory cycle can begin again. On top of that you have to add in refresh cycles: the rows are opened and closed on a schedule to ensure that the stored charge doesn't leak away, consuming time and adding to uneven memory latency.
Re: (Score:2)
NVidia has already announced "stacked dram" on their future "Volta" GPU's a couple of weeks ago.
streaks of bubbles in the water... (Score:2)
Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...
Re: (Score:2)
Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...
Yep. Hope they got signatories all notarized and everything.
This is what I call (Score:2)
thank you gods of the olympus!
p.s. for some time now I've been trying to find again a
5 years (Score:1)
It will probably be around 5 years until we can buy these things like we buy DDR3. This industry is developing so fast, yet moving so slow.
Hybrid Memory Cube has 4 Corner Time (Score:1, Funny)
Hybrid Memory Cube exists in a 4-point world. Four corners are absolute and storage capacity is circumnavigated around Four compass directions North, South, East, and West. DRAM consortium spreads mistruths about Hybrid Memory Cube four point space. This cannot be refuted with conventional two dimensional DRAM.
Memory is far more complex than you imagine. (Score:3)
If you think that modern memory is simple send an address and read or write the data you are much mistaken.
Have a read of What every programmer should know about memory [lwn.net] and get a simplified overview of what is going on. This too is only a simplification of what is really going on.
To actually build a memory controller is another step up again - RAM chips have configuration registers that need to be set, and modules have a serial flash on them that holds device parameters. With high speed DDR memory you even have to make allowances for the different lengths in the PCB traces, and that is just the starting point - the devices still need to perform real-time calibration to accurately capture the returning bits.
Roll on Serial Port Memory Technology [wikipedia.org]!
So we can expect (Hope?) to see this in GDDR6 spec (Score:1)
Re: (Score:2)
NVidia Volta, coming in 2016?
Cooling (Score:1)
How do they cool this apparatus?