Samsung Begins Mass Production of World's Fastest DRAM (hothardware.com) 65
MojoKid writes: Late last year marked the introduction of High Bandwidth Memory (HBM) DRAM courtesy of AMD's Fury family of graphics cards, each of which sports 4GB of HBM. HBM allows these new AMD GPUs to tout an impressive 512GB/sec of memory bandwidth, but it's also just the first iteration of the new memory technology. Samsung has just announced that it has begun mass production of HBM2. Samsung's 4GB HBM2 package is built on a 20 nanometer process. Each package contains four 8-gigabit core dies built on top of a buffer die. Each 4GB HBM2 package is capable of delivering 256GB/sec of bandwidth, twice that of first-generation HBM DRAM. NVIDIA's next-gen GPU technology, code-named Pascal, will utilize HBM2 for its frame buffer memory. High-end consumer-grade Pascal boards will ship with 16GB of HBM2 memory (in four 4GB packages), offering effective memory bandwidth of 1TB/sec (256GB/sec from each HBM2 package). Samsung is also reportedly readying 8GB HBM2 memory packages this year.
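For the curious, the quoted figures are easy to sanity-check. A back-of-envelope sketch, assuming each HBM2 stack exposes a 1024-bit interface at 2 Gbit/s per pin (the rate Samsung has quoted for these parts, not stated in the summary itself):

```python
# Back-of-envelope check of the bandwidth figures in the summary.
# Assumption: a 1024-bit interface per stack at 2 Gbit/s per pin.
pins = 1024                          # interface width per stack, in bits
gbit_per_pin = 2                     # transfer rate per pin, Gbit/s
stack_bw = pins * gbit_per_pin / 8   # GB/s per 4GB HBM2 package
board_bw = 4 * stack_bw              # four packages on a high-end Pascal board

print(stack_bw)  # 256.0 GB/s, matching the per-package figure
print(board_bw)  # 1024.0 GB/s, i.e. the quoted ~1TB/sec
```

Four packages times 256GB/sec is where the 1TB/sec headline number comes from.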
Go AMD! (Score:5, Insightful)
Re:Go AMD! (Score:5, Informative)
As far as AMD vs. NVIDIA goes... competition breeds innovation. I'm happy they compete and make better packages to fight for the GPU crown.
AMD will likely be the first to ship HBM2 GPU too (Score:1)
It has been said that Pascal GP104 will use GDDR5X. If Nvidia repeats the cycle, GP104 will be their flagship, and big Pascal GP110 won't be GeForce-ready until some time in 2017.
Which would make sense, considering that nVidia has no experience with HBM.
The AMD-Hynix collaboration on HBM started a while ago; by the end of 2013 they had only "finalized HBM 3D memory", and it took two more years to ship the Fury series GPUs with HBM:
http://linustechtips.com/main/... [linustechtips.com]
Re: (Score:3)
Re: (Score:2)
Exactly my approach.
The AMD CPUs are quite a bit cheaper than the Intel ones, and are usually enough for the games I want to play.
The temperatures these CPUs reach can be a bit frightening. Mine usually runs between 60C and 70C, but so far none has let the magic smoke out.
My use of nVidia cards comes more from habit than from strong opinions about either company.
Re: (Score:2)
It's a shame you had problems, but I haven't had any driver problems from my AMD cards. This includes the 6250-powered APU running Windows 10 that I am using to write this message.
Re: (Score:2)
It is supported under Windows 10 - I know because my little media PC at home has the E450 APU with the 6320, and it runs Windows 10 just fine.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Thank you for your sacrifice. I will shed a tear for you as I play on my GTX980ti.
Re: Go AMD! (Score:1)
Does AMD have decent Linux support yet?
I haven't bought anything from them for about nine years, since I made the switch to Linux for my desktop and AMD had near-zero Linux support.
Re: Go AMD! (Score:1)
Draym! (Score:2)
This stuff is meant to be pretty fancy. (Score:4, Informative)
The initial AMD Fury card was a bit of a disappointment. I mean, it is quite fast for its size and quite fast for only 4GB of onboard memory, but it didn't thrash the nvidia 980Ti it competes with, despite being a newer technology with more memory bandwidth.
I haven't investigated (nor do I care to) as to /precisely/ why, but it may be that the AMD GPU itself is simply not powerful enough to use that bandwidth effectively, or that the 4GB is holding it back due to texture sizes.
*THAT* being said, that's phase 1 of HBM. Phase 2 is about to kick in this year for both AMD and nvidia, and premium video cards will certainly be utilising this technology at the high end.
The other thing that's frequently mentioned when these come up is that this on-chip (or is it on-package?) memory is going to be utilised in some of AMD's mid-tier APU chips (the combined CPU/GPU ones), which should make some onboard video surprisingly good in the near future. Perhaps not dedicated-GPU good, but it may compete well with today's low- to mid-tier dedicated GPUs.
Also, for compute functions (scientific work or whatever people use all that number-crunching power on dedicated GPUs for), this will be far better. Apparently it's similar to Intel's Xeon Phi (Knights Landing)? https://en.wikipedia.org/wiki/... [wikipedia.org]
I guess ultimately what has enabled this technology to exist is stacked RAM, since they can fit 4GB of memory inside a single, very small chip.
(Here you can see the existing stuff: 1GB in a single chip, the four smaller chips around the GPU) https://www.google.com.au/sear... [google.com.au]; soon to be 4GB in presumably the same physical space, and 8GB shortly after.
It looks to me like stacked RAM is the future of many things (SSD capacities are booming because of it).
It's all pretty exciting for the future of bandwidth, 1TB/s is pretty nice and I imagine it'll only go up from there.
(I read some theories recently about 'stacking' CPUs too. The heat may become an issue, but if they can lay out 48 layers of memory inside a chunk of silicon, why not multiple processors?) However, that's for the smart people to figure out.
Please read the replies to this post as I don't follow as closely as I used to and several pieces of information here might be slightly off.
Re: (Score:3)
... may be the AMD GPU itself is simply not powerful enough to use that bandwidth effectively ...
Building the right balance of compute units, memory capacity and bandwidth is a hard problem. Game developers will target high frame rates on their target hardware, optimising or cutting back features until everything works well enough.
We will have to wait and see if developers will find creative ways to use this bandwidth increase.
Re: (Score:3)
Game designers design for the lowest common denominator. This memory advance will be irrelevant until it's in 50% of the GPUs out there. That has been true for years and will remain true for years to come. No sane game developer would target the performance of a card that 90% of GPUs can't match.
Re: (Score:2)
That would delay mainstream usage of HBM until the next console generation.
Re: (Score:1)
All that but you can't tell it's from its?
Re: (Score:2)
Fury Nano rocks after price drop ($499) (Score:1)
After the price drop, the Fury Nano costs about the same as a 980 (non-Ti) at $499, while handily beating it in most games, in a tiny form factor.
HBM memory... (Score:2)
What I want is a motherboard that uses one of these stacks to feed my 4-channel Intel socket 2011-3 processor.
The current max for memory is ~25GB/s per channel, so four channels from one device still leaves a lot of room for improvement.
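Rough numbers, taking the parent's ~25GB/s-per-channel figure at face value (that approximates DDR4-3200; the comparison is a sketch, not a benchmark):

```python
# Quad-channel DDR4 vs. a single HBM2 stack, using the parent's
# ~25GB/s-per-channel estimate (roughly DDR4-3200).
ddr4_channel_bw = 25                 # GB/s per channel (approx.)
ddr4_total = 4 * ddr4_channel_bw     # all four channels of a 2011-3 part
hbm2_stack_bw = 256                  # GB/s for one HBM2 package

print(ddr4_total)     # 100 GB/s
print(hbm2_stack_bw)  # 256 GB/s -- one stack outruns all four channels
```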
Two processors could keep one busy... :)
Re: (Score:3)
Re: (Score:2)
Better FDTD simulations (Score:1)
I found out the hard way that memory bandwidth was the bottleneck for this activity.
Amazing bandwidth, no better latency. (Score:5, Interesting)
The latency of HBM and other technologies of its ilk is no better than (even slightly worse than) DDR3's.
It's no good for large last-level caches -- but eight of those 8GB stacks would make for a nice 64GB of RAM with 2TB/sec of bandwidth. I'd like to see that connected to a good CPU.
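The arithmetic behind that 64GB / 2TB/sec figure, assuming eight of Samsung's upcoming 8GB packages at 256GB/sec each:

```python
# Eight 8GB HBM2 stacks, each delivering 256GB/sec.
stacks = 8
capacity_gb = stacks * 8        # total capacity in GB
bandwidth_gbs = stacks * 256    # aggregate bandwidth in GB/s

print(capacity_gb)              # 64 GB
print(bandwidth_gbs)            # 2048 GB/s, i.e. ~2TB/sec
```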
Chips made with the tech in mind. (Score:2)
I wonder how many kludges and "offloads" amd/nvidia will be able to pull off with this sort of external bandwidth, increasing performance and lowering costs in the process.