Graphics Hardware

NVIDIA Unveils Next Gen Pascal GPU With Stacked 3D DRAM and GeForce GTX Titan Z

MojoKid (1002251) writes "NVIDIA's 2014 GTC (GPU Technology Conference) kicked off today in San Jose, California, with NVIDIA CEO Jen-Hsun Huang offering up a healthy dose of new information on next-generation NVIDIA GPU technologies. Two new NVIDIA innovations will be employed in their next-gen GPU technology, now known by its codename 'Pascal.' First, there's a new serial interconnect known as NVLink for GPU-to-CPU and GPU-to-GPU communication. Though details were sparse, NVLink is apparently a serial interconnect that employs differential signaling with an embedded clock, and it allows for unified memory architectures and, eventually, cache coherency. It's similar to PCI Express in terms of command set and programming model, but NVLink will offer a 5X to 12X boost in bandwidth, up to 80GB/sec.
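Unified memory is worth unpacking: the programming model is already exposed in CUDA 6 via cudaMallocManaged, and NVLink is pitched as the hardware that makes that same model fast. A minimal sketch of what it looks like today; the kernel, sizes, and names here are illustrative, not from NVIDIA's announcement:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Trivial kernel: scale an array in place.
    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data;

        // One allocation visible to both CPU and GPU -- no explicit
        // cudaMemcpy in either direction. Today the runtime migrates
        // pages over PCIe behind the scenes; NVLink's promise is to
        // speed up exactly this kind of shared access.
        cudaMallocManaged(&data, n * sizeof(float));

        for (int i = 0; i < n; i++) data[i] = 1.0f;      // CPU writes

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU updates
        cudaDeviceSynchronize();  // required before the CPU reads again

        printf("data[0] = %f\n", data[0]);               // CPU reads result
        cudaFree(data);
        return 0;
    }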

The second technology to power NVIDIA's forthcoming Pascal GPU is 3D stacked DRAM. The technique employs through-silicon vias (TSVs) that allow DRAM dies to be stacked on top of one another, providing much more density in the same PCB footprint for the DRAM package. Jen-Hsun also used his opening keynote to show off NVIDIA's most powerful graphics card to date, the absolutely monstrous GeForce GTX Titan Z. The upcoming GeForce GTX Titan Z is powered by a pair of GK110 GPUs, the same chips that power the GeForce GTX Titan Black and GTX 780 Ti. All told, the card features 5,760 CUDA cores (2,880 per GPU) and 12GB of frame buffer memory (6GB per GPU). NVIDIA also said that the Titan Z's GPUs are tuned to run at the same clock speed, and feature dynamic power balancing so neither GPU creates a performance bottleneck."
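One practical note on dual-GPU boards: to CUDA software, a card like the Titan Z appears as two separate devices, which is why the per-GPU figures (2,880 cores, 6GB) are what a programmer actually works with. A quick sketch using the standard CUDA runtime enumeration calls; the output format is illustrative:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);  // a Titan Z should report 2 devices

        for (int dev = 0; dev < count; dev++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            // totalGlobalMem is per device: ~6GB per GK110 here,
            // not the 12GB headline figure for the whole board.
            printf("Device %d: %s, %zu MB, %d SMs\n",
                   dev, prop.name,
                   (size_t)(prop.totalGlobalMem >> 20),
                   prop.multiProcessorCount);
        }
        return 0;
    }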
  • by Anonymous Coward on Wednesday March 26, 2014 @10:15AM (#46583733)

    Every Nvidia GPU we've purchased for CUDA compute tasks in the past five years has crashed frequently under load.

  • by Rhys ( 96510 ) on Wednesday March 26, 2014 @11:00AM (#46584115)

Even if it wasn't 78k (and it isn't; they listed it at 3k if you RTFA), that is a steal if your compute load can actually extract the 8 TFLOPS from it -- assuming that's 64-bit FLOPS, not 32-bit.

I mean, slightly under 10 years ago I knew a Big Ten university that paid 3000k for a cluster with fewer TFLOPS (around 7, and not all in one computation/network). See the worked arithmetic below.

I guess I shouldn't be surprised; in another few years that should be in laptops or phones.

  • by Beamboom ( 2692671 ) on Wednesday March 26, 2014 @11:03AM (#46584137)
    "[...] it allows for unified memory architectures and eventually cache coherency"

Isn't this more or less precisely how the PS4 is designed? If my memory(!) serves me correctly, I'd call this a pretty good design move by Sony, one that should bode well for the longevity of that console once games are designed for this type of architecture.
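On the price-per-TFLOPS comparison above, taking both figures at face value:

    $3,000 / 8 TFLOPS ≈ $375 per TFLOPS (Titan Z)
    $3,000,000 / 7 TFLOPS ≈ $429,000 per TFLOPS (the ~2004 cluster)

That's roughly a 1,000X improvement in a decade, with the caveat the commenter raises: the cluster figure was likely double precision, while 8 TFLOPS is the Titan Z's single-precision number. GK110 runs double precision at one third of its single-precision rate, so the like-for-like gap is about 3X smaller, though still enormous.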
