NVIDIA Drops Surprise Unveiling of Pascal-Based GeForce GTX Titan X (hothardware.com) 134

MojoKid writes from a report via HotHardware: Details just emerged from NVIDIA regarding its upcoming, powerful Pascal-based Titan X graphics card, featuring a 12-billion-transistor GPU codenamed GP102. NVIDIA is obviously having a little fun with this one: at an artificial intelligence (AI) meet-up at Stanford University this evening, NVIDIA CEO Jen-Hsun Huang first announced, and then actually gave away, a few brand-new Pascal-based NVIDIA TITAN X GPUs. Apparently, Brian Kelleher, one of NVIDIA's top hardware engineers, made a bet with Huang that the company could squeeze 10 teraflops of computing performance out of a single chip. Jen-Hsun thought that wasn't doable in this generation of product, but apparently Brian and his team pulled it off. The new Titan X is powered by NVIDIA's largest GPU -- the company says it's actually the biggest GPU ever built. The Pascal-based GP102 features 3,584 CUDA cores clocked at 1.53GHz (the previous-gen Titan X has 3,072 CUDA cores clocked at 1.08GHz). The specifications NVIDIA has released thus far include: 12 billion transistors, 11 TFLOPS FP32 (32-bit floating point), 44 TOPS INT8 (new deep learning inferencing instructions), 3,584 CUDA cores at 1.53GHz, and 12GB of GDDR5X memory (480 GB/s). The new Titan X will be available August 2nd for $1,200 direct from NVIDIA.com.
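
For context on where the 11 TFLOPS figure comes from, assuming (as on prior NVIDIA architectures) that each CUDA core can retire one fused multiply-add -- two floating-point operations -- per clock:

    3,584 cores x 2 FLOPs/clock x 1.53 GHz ≈ 10.97 TFLOPS FP32

which rounds to the quoted 11 TFLOPS.
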
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday July 22, 2016 @08:03AM (#52559993)

    I thought it had been surpassed by C++, but this is great for everyone.

    • I thought it had been surpassed by C++, but this is great for everyone.

      Nah it just morphed into Delphi and got hacked to death

      • Nah it just morphed into Delphi and got hacked to death

        Wait, I thought that Modula-2 was the successor...

        • Wait, I thought that Modula-2 was the successor...

          Wait for it....

          Oberon

          • Oberon

            That's the hostname of my FreeNAS file server. Not sure why my file server is relevant to this discussion.

    • Thanks Nvidia (Score:1, Flamebait)

      by freeze128 ( 544774 )
      I had to dig two layers into the story to find out that this has nothing to do with the PASCAL programming language. Pascal is the name of their GPU architecture. Thanks Nvidia, for using a name that is already well established in the tech community.
      • Re:Thanks Nvidia (Score:5, Insightful)

        by cdrudge ( 68377 ) on Friday July 22, 2016 @10:13AM (#52560787) Homepage

        Nvidia uses scientists' names for their products: Tesla, Fermi, Kepler, Maxwell, Pascal and Volta (the next version after Pascal).

        If the leading edge of "consumer" graphics cards is of any interest to someone, they'd know what Pascal is, since it was announced over two years ago.

        • It still sounds silly in this case, even if you're familiar with the industry. They should just name them after numbers to avoid confusion. When version 7 of 9 comes out, it'll be sweet.

        • I just have to say... Scientists have awesome names.

          • by cdrudge ( 68377 )

            Well, I'm sure Nvidia could have named them after scientists named Smith, Jones, Doe, Smith (a different one), and Johnson, but those names aren't very exciting.

      • I too thought it had something to do with the programming language. I remember taking C and Pascal the same semester in college. Big mistake!
        Why not just refer to it as the "Pascal architecture" in the story summary? I get that people who follow this might know what was meant, but not everyone spends thousands of dollars on video cards or follows things like this. I would think that a summary story should be a little more front-page-friendly. But then again, I prefer the /. of old.

  • by Provocateur ( 133110 ) <shediedNO@SPAMgmail.com> on Friday July 22, 2016 @08:09AM (#52560003) Homepage

    So bloody fast it actually made the Kessel run in 12 parsecs.

  • But... (Score:3, Interesting)

    by dmgxmichael ( 1219692 ) on Friday July 22, 2016 @08:21AM (#52560073) Homepage
    ... can it run Crysis?
  • Heat? (Score:2, Interesting)

    by twmcneil ( 942300 )
    The card looks like it will fit in a standard case, but the cooling tower will be the size of a small house.
    • The card looks like it will fit in a standard case, but the cooling tower will be the size of a small house.

      Sorry, this isn't an AMD card...

      • by Shinobi ( 19308 )

        Some people here still quote power use/heat output figures from the Fermi architecture, especially the 4x0 series, which debuted in 2010...

    • by aliquis ( 678370 )

      but the cooling tower will be the size of a small house.

      You do realize the actual cooler is in the picture in the article, right?
      http://hothardware.com/Content... [hothardware.com]
      And that it's part of the card, which fits in a normal case (I'm not aware of any standards for graphics card lengths; fitting on a mini-ITX board is a possible exception).

      A small house for a family of mice, or some small snakes, or a few small fish, I guess.

  • by CorporalKlinger ( 871715 ) on Friday July 22, 2016 @08:23AM (#52560091)
    "By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shutdown. The attack began at 6:18 PM, just as he said it would" ...at an artificial intelligence conference in California. Judgment day has arrived. Now we just need to perfect time travel.
    • by Yvan256 ( 722131 )

      There's a flaw in your logic: Skynet [blogspot.com] is already a time-traveler. [dailymail.co.uk]

    • by aliquis ( 678370 )

      Now we just need to perfect time travel.

      Perfect?

      We don't even know if it's possible in the future-to-past direction.

    • by aliquis ( 678370 )

      Or better:

      Now we just need to perfect time travel.

      Perfect?
      We don't even have any proof that it's possible to travel backwards in time!

      My other post made it a "maybe," whereas this one is a "not as far as we know." I remember seeing something, likely here on Slashdot, which, if true, suggested it wouldn't be possible. But maybe that's not proven either. Maybe it would have to be perfect once it's in use, but currently it's less about perfecting it and more about (not) being able to do it at all.

  • by Anonymous Coward

    For those wondering: http://www.nvidia.com/object/gpu-architecture.html

  • Basically a supercomputer on a card. I'd be *really* interested in finding out if those cores are individually addressable, etc., and what the memory setup is. I remember the computer my Dad did his PhD calculations on -- an IBM 704 with memory expansion to a whopping 48K

    • by Anonymous Coward

      It really is a supercomputer on a card, complete with a device driver that's effectively a batch job scheduling system. There are a lot of limitations, though: sets of threads all need to run the same code and, as much as possible, follow the same branches. It's not like having a couple thousand individual CPUs that you can program independently, by any stretch.

      This tends to work well for things like image processing or machine learning, and not nearly as well for tasks like sorting or searching.
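
      Roughly what that looks like from the programmer's side -- purely an illustrative CUDA sketch, nothing from NVIDIA or the article, and the kernel and names are made up: every thread executes the same kernel body on its own element, and you try to keep them all taking the same branches.

        #include <cuda_runtime.h>

        // Every thread runs this same kernel body on a different element, which is
        // why data-parallel work like image processing maps well to the GPU,
        // while heavily divergent code (sorting, searching) tends not to.
        __global__ void brighten(unsigned char* pixels, int n, int amount)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
            if (i < n) {                                    // nearly every thread takes the same branch
                int v = pixels[i] + amount;
                pixels[i] = v > 255 ? 255 : v;              // clamp to the 8-bit range
            }
        }

        int main()
        {
            const int n = 1 << 20;                  // 1M "pixels" of dummy data
            unsigned char* d_pixels;
            cudaMalloc((void**)&d_pixels, n);
            cudaMemset(d_pixels, 100, n);

            // Launch ~1M threads in blocks of 256; the driver queues the work
            // much like a batch scheduler and runs it when the GPU is free.
            brighten<<<(n + 255) / 256, 256>>>(d_pixels, n, 50);
            cudaDeviceSynchronize();

            cudaFree(d_pixels);
            return 0;
        }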

  • Great Scott! 1.21 jiga flops! 1.21 jiga flops?! What was I thinking!!

    Oh, it was on a dare...

  • Why GDDR5X? HBM2 triples the memory throughput. If they want a monster card that is overkill for today, it should at least incorporate the king of memory buses.

    • Nvidia has yet to use HBM in a major product, let alone HBM2. HBM2 isn't quite volume-ready yet, and AMD allegedly has some form of "dibs" on getting production priority from at least Hynix.

    • Cost would be my bet. HBM2 is set to be used in their Tesla P100 GPU, which will have a far higher price. They probably couldn't get the manufacturing costs low enough for a consumer part with both the monstrous GPU die and HBM.
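
      For a rough sense of the numbers involved -- assuming (the summary doesn't say) that the 480 GB/s figure comes from a 384-bit bus with GDDR5X running at 10 Gbps per pin:

          384 bits x 10 Gbps per pin = 3,840 Gb/s = 480 GB/s

      HBM2 gets its headroom from a much wider interface (1,024 bits per stack at lower per-pin rates), which is how multi-stack configurations can end up well beyond that figure.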
