Graphics Hardware

NVIDIA Unveils Next Gen Pascal GPU With Stacked 3D DRAM and GeForce GTX Titan Z

Posted by Unknown Lamer
from the reinforcing-the-memory-hierarchy dept.
MojoKid (1002251) writes "NVIDIA's 2014 GTC (GPU Technology Conference) kicked off today in San Jose, California, with NVIDIA CEO Jen-Hsun Huang offering up a healthy dose of new information on next-generation NVIDIA GPU technologies. Two new NVIDIA innovations will be employed in their next-gen GPU technology, now known by its code name 'Pascal.' First, there's a new serial interconnect known as NVLink for GPU-to-CPU and GPU-to-GPU communication. Though details were sparse, NVLink is apparently a serial interconnect that employs differential signaling with an embedded clock, and it allows for unified memory architectures and, eventually, cache coherency. It's similar to PCI Express in terms of command set and programming model, but NVLink will offer a massive 5-12X boost in bandwidth, up to 80GB/sec.
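Taking the summary's figures at face value, the claimed speedup can be sanity-checked with a little arithmetic. The baseline is an assumption (NVIDIA didn't state it explicitly here): PCI Express 3.0 x16, which delivers roughly 15.75 GB/s of effective bandwidth per direction.

```python
# Rough sanity check of the quoted NVLink numbers.
# Assumption: the "5-12X" multiplier is measured against PCIe 3.0 x16,
# whose effective per-direction bandwidth is about 15.75 GB/s.
PCIE3_X16_GBPS = 15.75   # approximate effective PCIe 3.0 x16 bandwidth, GB/s
NVLINK_GBPS = 80.0       # peak figure quoted in the summary, GB/s

speedup = NVLINK_GBPS / PCIE3_X16_GBPS
print(f"{speedup:.1f}x")  # lands at the low end of the quoted 5-12X range
```

That puts the 80GB/sec figure at about 5x PCIe 3.0 x16, consistent with the bottom of the 5-12X range; the higher multiples presumably assume multiple NVLink bricks or a slower baseline.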

The second technology to power NVIDIA's forthcoming Pascal GPU is 3D stacked DRAM. The technique employs through-silicon vias that allow DRAM dies to be stacked on top of one another, providing much greater density in the same PCB footprint for the DRAM package. Jen-Hsun also used his opening keynote to show off NVIDIA's most powerful graphics card to date, the absolutely monstrous GeForce GTX Titan Z. The upcoming GeForce GTX Titan Z is powered by a pair of GK110 GPUs, the same chips that power the GeForce GTX Titan Black and GTX 780 Ti. All told, the card features 5,760 CUDA cores (2,880 per GPU) and 12GB of frame buffer memory (6GB per GPU). NVIDIA also said that the Titan Z's GPUs are tuned to run at the same clock speed and feature dynamic power balancing, so neither GPU creates a performance bottleneck."
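The dual-GPU totals quoted above are simply the per-GPU figures doubled; a trivial check confirms they add up as stated:

```python
# Verify the per-GPU and total Titan Z figures quoted in the summary agree.
cores_per_gpu = 2880     # GK110 CUDA cores per GPU
memory_per_gpu_gb = 6    # frame buffer per GPU, GB
num_gpus = 2             # Titan Z is a dual-GPU card

total_cores = cores_per_gpu * num_gpus
total_memory_gb = memory_per_gpu_gb * num_gpus
print(total_cores, total_memory_gb)  # 5760 12
```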
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward

    And you'll never need to turn on the heater again!

  • by Anonymous Coward on Wednesday March 26, 2014 @09:14AM (#46583731)

    but switching to Pascal is a step in the right direction if a bit retro, I guess.

  • by Anonymous Coward on Wednesday March 26, 2014 @09:15AM (#46583733)

    Every Nvidia GPU we've purchased for CUDA compute tasks in the past five years has crashed frequently under load.

    • by Chas (5144) on Wednesday March 26, 2014 @10:22AM (#46584283) Homepage Journal

      But is this a failure of the implementation or a failure of the installation?

      It's really easy to say "It crashes all the time".

      But it's also really easy to leave out "Our compute cluster space is running at 100+ degrees ambient and our power distribution is shoddy."
      It's also really easy to leave out things like "Our no-name, cut-rate motherboards, memory and PSUs probably aren't up to the task of running these things at maximum utilization."

      • by lgw (121541)

Motherboards are a real issue: GPUs run quite hot under load, and many motherboards start cracking under the thermal stress. They're good for 6-12 months, and then they start crashing frequently under load. And there's no solid guide to the good ones (it's not the sort of thing you can test in a week), which is very frustrating for hobby system builders.

  • by Anonymous Coward
    Wouldn't that make it 4D?
  • by Viol8 (599362) on Wednesday March 26, 2014 @09:28AM (#46583833)

Either it's the BEGINning of a new era in GPUs, or it's called Pascal because it's actually French and will go on strike the minute it's asked to render a game more complex than Flappy Bird.

    • by Anonymous Coward

      Borland Delphi Object Pascal 7.1 to be precise (?) -> http://start64.com/index.php?o... [start64.com]

      * :)

      (Great language & IDE - does everything pretty much that C++ can do (except multiple inheritance) & as EASILY as VB... best of BOTH worlds, in 1 tool!)

      APK

      P.S.=> It's my favorite, & has been, since it "stole me away" from MSVC++ &/or MSVB circa 1997, when Delphi "knocked the chocolate" out of them BOTH in 7/10 tests (especially MATH & STRING work, where it literally DOUBLED & THEN SOME it

      • Re: (Score:2, Flamebait)

        by Viol8 (599362)

For you and all the other clueless idiots out there who didn't get it: Pascal is a French name; the language was named after Blaise Pascal, FFS.

  • by sribe (304414) on Wednesday March 26, 2014 @09:43AM (#46583981)

    Like the things that they announced last year, which have simply disappeared off the roadmap without mention. In other words, they are falling behind schedule, and trying desperately to spin this as ongoing progress [semiaccurate.com].

    • Actually, within the PC Enthusiast community, it's believed they are not behind schedule. They just have little reason to push things out quickly due to a lack of competition and need for the technologies themselves. i.e. Neither AMD nor games these days are at a point that actively require the technologies they have (had) planned to be released either to give AMD a run for their money, or to actually make the games playable at our current resolutions. 1080p/1440p are the currently most used resolutions wit

That doesn't agree with what I've read. Supposedly Nvidia is running behind schedule, but it isn't their fault. The problem is that TSMC can't deliver on their promises, so everything is being pushed out. Ideally both AMD and Nvidia would have 20nm (or what TSMC calls 20nm) GPUs on the market already, but TSMC has had issues bringing 20nm production online. That's forcing everyone using them to rethink their planned introduction of new products.

        • Interesting. Thanks for that info. It could just be some bias in the community (which, of communities, the PC Enthusiast one is probably the most guilty of). I need to further expand my horizons.

      • The 750 Ti running a 128 bit memory bus is pretty clear evidence they are holding back. If they were really concerned about pushing the market, it would have a wider memory bus.
    • by edxwelch (600979)

Similarly, there's no mention of a Tegra K1 SoC with LTE. I think this is a sign that Nvidia will shortly abandon the smartphone market. Without CDMA or TD-SCDMA in their modem support, they can't sell in the Chinese or American markets.

  • by Beamboom (2692671) on Wednesday March 26, 2014 @10:03AM (#46584137)
    "[...] it allows for unified memory architectures and eventually cache coherency"

Isn't this more or less precisely how the PS4 is designed? If my memory(!) serves me correctly, I'd call this a pretty good design move by Sony, something that should potentially bode well for the longevity of that console, once games are designed for this type of architecture.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The PS4 has 8GB of unified GDDR5, and the GPU has 180GB/s bandwidth to that. Cache coherency with CPU is possible but reduces bandwidth to 10GB/s - quite a difference. It's cache coherent today, not eventually.

      PS4 GPU has 1,152 scalar ALUs (72 x 16-way SIMD); I'm not sure how that compares to "CUDA cores", but it sounds like the Titan Z has 2x the memory, 1/2 the bandwidth, 4x the ALUs ... and 8x the price.

You mean how AMD's unified GPU & CPU chips are designed? (The PS4 uses an AMD processor, and I believe that Sony had much less design influence on it than it did on Cell.)

      Yes, it seems like nVidia might finally be starting to slowly catch up...

    • by gman003 (1693318)

      Yes, as is the Xbox One and the latest APUs.

      AMD has been focusing on tight CPU/GPU integration. They're pretty far along with it.

Nvidia was primarily focusing on power efficiency, and they're pretty good on that front right now. Their actual mobile stuff is selling like crap because they aren't quite there yet, but compare Kepler to GCN and you'll see how efficient it is. Maxwell is supposedly even more efficient, but they haven't launched high-end parts, so we can't really judge yet.

      Nvidia did have CPU/GPU integrat

  • All told, the card features 5,760 CUDA cores (2,880 per GPU) and 12GB of frame buffer memory—6GB per GPU

    So... does that mean that the graphics card I just bought is outdated already??

  • How about getting your drivers to work?

  • Warning: Pregnant women, the elderly, and children under 10 should avoid prolonged exposure to the GTX Titan-Z.
    Caution: the GTX Titan-Z may suddenly accelerate to dangerous speeds.
    the GTX Titan-Z contains a liquid core, which, if exposed due to rupture, should not be touched, inhaled, or looked at.
    Do not use the GTX Titan-Z on concrete.
    Discontinue use of the GTX Titan-Z if any of the following occurs:
    itching
    vertigo
    dizziness
    tingling in extremities
    loss of balance or coordination
    slurred speech
    tempor
  • I remember buying memory modules where the memory was stacked way the hell back when.

    The thing that's interesting about this iteration is the fact that pass-throughs have been built straight into the silicon.

    • I have a memory card from an original IBM PC from 1982, which has stacked memory chips. In fact, each pair of chips has ALL of their pins wired to the same contacts on the PCB. Although I have been unwilling to take apart the board to verify, this leads me to believe that the chip on top and the chip underneath are different. I'm guessing one of them has an inverter on an address line, so it will respond to even addresses, while the other responds to odd addresses.
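The even/odd split the commenter guesses at can be sketched in a few lines. Everything here is an assumption drawn from that guess: two piggybacked dies sharing all pins, with the top die (hypothetically) carrying an inverter on one select/address line, so the pair partition the address space between them; the bit position used is arbitrary.

```python
# Sketch of the guessed addressing scheme: two stacked DRAM chips wired to
# the same PCB contacts, distinguished only by an inverter on one address
# line in the top die. Which bit acts as the select line is an assumption.
def responding_chip(addr: int, select_bit: int = 0) -> str:
    """Return which die answers a given address."""
    return "top" if (addr >> select_bit) & 1 else "bottom"

print([responding_chip(a) for a in range(4)])  # ['bottom', 'top', 'bottom', 'top']
```

With bit 0 as the select line, the two dies interleave at word granularity; a higher bit would instead split the address space into two contiguous halves.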
  • If I were picking a codename for my next product, I'm not sure I'd pick the name of a language famous for being useless for real work without vendor extensions.

    • by TeknoHog (164938)

      If I were picking a codename for my next product, I'm not sure I'd pick the name of a language famous for being useless for real work without vendor extensions.

      Especially knowing how Nvidia performs with OpenCL vs. their vendor specific solution...

  • by fxj (267709) on Wednesday March 26, 2014 @12:36PM (#46585713)

This is hilarious!
    Everybody here thinks it is named after the programming language.

    Tesla, Fermi, Kepler, Pascal,...

    What do they all have in common???

yeah: a car, a satellite and a programming language
