Hardware Technology

Samsung Begins Mass Production of World's Fastest DRAM (hothardware.com) 65

MojoKid writes: Late last year marked the introduction of High Bandwidth Memory (HBM) DRAM, courtesy of AMD's Fury family of graphics cards, each of which sports 4GB of HBM. HBM allows these new AMD GPUs to tout an impressive 512GB/sec of memory bandwidth, but it's also just the first iteration of the new memory technology. Samsung has just announced that it has begun mass production of HBM2. Samsung's 4GB HBM2 package is built on a 20 nanometer process. Each package contains four 8-gigabit core dies stacked on top of a buffer die. Each 4GB HBM2 package is capable of delivering 256GB/sec of bandwidth, twice that of first-generation HBM DRAM. NVIDIA's next-generation GPU, code-named Pascal, will use HBM2 for its frame buffer memory. High-end consumer-grade Pascal boards will ship with 16GB of HBM2 memory (in four 4GB packages), offering effective memory bandwidth of 1TB/sec (256GB/sec from each HBM2 package). Samsung is also reportedly readying 8GB HBM2 memory packages this year.
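The capacity and bandwidth figures in the summary hang together arithmetically; here's a quick sanity check (all numbers are from the summary, the script itself is just illustration):

```python
# Sanity check of the HBM2 figures quoted in the summary.
GB_PER_GBIT = 1 / 8  # gigabytes per gigabit

# One HBM2 package: four 8-gigabit core dies stacked on a buffer die.
dies_per_package = 4
die_capacity_gb = 8 * GB_PER_GBIT             # 8 Gbit = 1 GB per die
package_capacity_gb = dies_per_package * die_capacity_gb
assert package_capacity_gb == 4               # 4 GB per package

# Each package delivers 256 GB/s, twice first-generation HBM.
hbm1_bw, hbm2_bw = 128, 256
assert hbm2_bw == 2 * hbm1_bw

# A high-end Pascal board with four packages:
packages = 4
board_capacity_gb = packages * package_capacity_gb   # 16 GB
board_bandwidth_gb_s = packages * hbm2_bw            # 1024 GB/s, i.e. ~1 TB/s
print(board_capacity_gb, board_bandwidth_gb_s)
```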
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Go AMD! (Score:5, Insightful)

    by Foxhoundz ( 2015516 ) on Tuesday January 19, 2016 @08:04PM (#51333575)
    You are the underdog and the warrior that has stood the wrath of Nvidia! I will keep buying your products until the bitter end!
    • Thank you for your sacrifice. I will shed a tear for you as I play on my GTX980ti.

Nvidia talks crap about AMD and doesn't share anything it develops. Yet as soon as AMD comes out with something new, they use it... oh, the irony.
  • that's NVIDIA-ah to you.
  • by AbRASiON ( 589899 ) * on Tuesday January 19, 2016 @08:26PM (#51333675) Journal

    The initial AMD Fury card was a bit of a disappointment, I mean it is quite fast for it's size and it's also quite fast for only 4GB memory onboard, but it didn't thrash the nvidia 980Ti it competes with, despite being a newer technology with more memory bandwidth.
    I haven't investigated (nor do I care to) as to /precisely/ why, but it may be that the AMD GPU itself is simply not powerful enough to use that bandwidth effectively, or that the 4GB is holding it back due to texture size.

    *THAT* being said, that's phase 1 of HBM, phase 2 is about to kick in this year for both AMD and nvidia and premium video cards will be utilising this technology in the high end for certain.

    The other thing that's frequently mentioned when these are brought up is that this on-chip (or is it on-package?) memory is going to be utilised in some of AMD's mid-tier APU chips (the combined CPU/GPU ones), which should make some onboard video surprisingly damn good in the near future. Perhaps not dedicated-GPU good, but it may compete well with today's low- to mid-tier dedicated GPUs.

    Also, for compute workloads (scientific stuff, or whatever people use all that number-crunching power on dedicated GPUs for), this will be far better. (Apparently it's similar to Intel's Xeon Phi (Knights Landing) or some such: https://en.wikipedia.org/wiki/... [wikipedia.org])

    I guess ultimately what has enabled this technology to exist is stacking ram (?) since they can fit 4GB of memory inside a single, very small chip.
    (Here you can see the existing stuff, 1GB in a single chip, the 4 smaller chips around the GPU) https://www.google.com.au/sear... [google.com.au] soon to be 4GB in presumably the same physical space and 8GB shortly

    It looks to me like stacked ram is the future in many things (SSD capacity booming due to this)
    It's all pretty exciting for the future of bandwidth, 1TB/s is pretty nice and I imagine it'll only go up from there.
    (I read some theories recently about 'stacking' CPU's too, although the heat may become an issue? but if they can lay out 48 layers of memory inside a chunk of silicon, why not lay out multiple processors) however that's for the smart people to figure out.
    Please read the replies to this post as I don't follow as closely as I used to and several pieces of information here might be slightly off.

    • ... may be the AMD GPU itself is simply not powerful enough to use that bandwidth effectively ...

      Building the right balance of compute units, memory capacity and bandwidth is a hard problem. Game developers will target high frame rates on their target hardware, optimising or cutting back features until everything works well enough.

      We will have to wait and see if developers will find creative ways to use this bandwidth increase.
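One way to frame the bandwidth increase is a per-frame traffic budget: at a fixed frame rate, aggregate bandwidth caps how many bytes the GPU can touch each frame. A rough sketch, assuming perfect utilisation (which real workloads never achieve; the 336GB/sec figure is the GTX 980 Ti's rated GDDR5 bandwidth):

```python
# Back-of-envelope memory-traffic budget per frame (assumes the GPU
# could sustain its full rated bandwidth, which real workloads can't).
def traffic_per_frame_gb(bandwidth_gb_s, fps):
    return bandwidth_gb_s / fps

hbm2_board = traffic_per_frame_gb(1024, 60)   # 1 TB/s HBM2 board
gddr5_board = traffic_per_frame_gb(336, 60)   # GTX 980 Ti class GDDR5
print(round(hbm2_board, 1), round(gddr5_board, 1))  # 17.1 5.6 (GB per frame)
```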

        Game designers design for the lowest common denominator. This memory advance will be irrelevant until it's in 50% of the GPUs out there. This has been true for years and will remain true for years to come. No sane game developer would target the performance of a card that 90% of GPUs can't support.

        • I don't actually think it'll be relevant until it's available in the consoles since very few houses develop exclusively for PCs.
          That would delay mainstream usage of HBM until the next console generation.
    • by Anonymous Coward

      All that but you can't tell it's from its?

    • I also find it strange. In theory a Fury should be faster than a GTX 980, but the opposite happened. I think the reason is that an HBM stack behaves like a NAND flash chip: slow on individual accesses but fast when you access multiple chips in parallel, in a RAID0-like scheme.
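The RAID0 analogy can be sketched numerically: a single stack bounds any one transfer, while striping across stacks multiplies aggregate throughput without improving per-access latency (a toy model, not how a real memory controller actually schedules anything):

```python
# Toy model of striping a large transfer across independent HBM stacks.
def transfer_time_s(size_gb, per_stack_gb_s, stacks):
    # Assumes the transfer stripes perfectly across all stacks.
    return size_gb / (per_stack_gb_s * stacks)

one_stack = transfer_time_s(4, 256, 1)    # whole transfer served by one stack
four_stacks = transfer_time_s(4, 256, 4)  # striped across four stacks
assert four_stacks == one_stack / 4       # 4x throughput; per-access latency unchanged
```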
    • After the price drop, the Fury Nano costs about the same as a 980 (non-Ti, $499) while handily beating it in most games, in a tiny form factor.

  • What I want is a Motherboard that will use one of these stacks to feed my 4-channel Intel socket 2011-3 processor.

    The current max for memory is ~25GB/s per channel, so four channels from one device still leaves a lot on the table for improvement.

    Two processors could keep one busy... :)
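The gap the parent describes is easy to quantify (assuming the "~25GB/s" figure means DDR4-3200's 25.6GB/sec per 64-bit channel; the comparison itself is mine):

```python
# Quad-channel DDR4 on socket 2011-3 vs a single HBM2 package.
ddr4_channel_gb_s = 25.6                 # e.g. DDR4-3200, 64-bit channel
quad_channel = 4 * ddr4_channel_gb_s     # 102.4 GB/s aggregate
hbm2_package = 256
print(quad_channel, hbm2_package / quad_channel)  # 102.4 2.5 (one package = 2.5x)
```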

  • by Anonymous Coward

    I found out the hard way that memory bandwidth was the bottleneck for this activity.

  • The latency of HBM and other technologies of its ilk is no better (even slightly worse) than DDR3.

    It's no good for large last-level caches -- but 8 of those 8GB stacks would make for a nice 64GB of RAM with 2TB/sec of bandwidth. I'd like to see that connected to a good CPU.
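The parent's 64GB figure follows from pairing eight of the upcoming 8GB stacks; the arithmetic (stack count from the comment, script mine):

```python
# Eight 8GB HBM2 stacks, each delivering 256 GB/s.
stacks = 8
capacity_gb = stacks * 8        # 64 GB total
bandwidth_gb_s = stacks * 256   # 2048 GB/s, i.e. 2 TB/s
assert capacity_gb == 64 and bandwidth_gb_s == 2048
```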

  • I wonder how many kludges and "offloads" AMD/NVIDIA will be able to pull off with this sort of external bandwidth, increasing performance and lowering costs in the process.
