AI Graphics Hardware Technology

NVIDIA Unveils 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100, NVSwitch Tech

bigwophh shares a report from HotHardware: NVIDIA CEO Jensen Huang took to the stage at GTC today to unveil a number of GPU-powered innovations for machine learning, including a new AI supercomputer and an updated version of the company's powerful Tesla V100 GPU that now sports a hefty 32GB of on-board HBM2 memory. A follow-on to last year's DGX-1 AI supercomputer, the new NVIDIA DGX-2 can be equipped with double the number of Tesla V100 processing modules for double the GPU horsepower. The DGX-2 can also offer four times the available memory space, thanks to the updated Tesla V100's larger 32GB of memory. NVIDIA's new NVSwitch technology is a full-crossbar GPU interconnect fabric that allows the platform to scale up to 16 GPUs and use their memory contiguously, whereas the previous DGX-1 platform was limited to 8 total GPU complexes and their associated memory. NVIDIA claims NVSwitch is five times faster than the fastest PCI Express switch and offers an aggregate 2.4TB per second of bandwidth. A new Quadro card was also announced. Called the Quadro GV100, it too is powered by Volta. The Quadro GV100 packs 32GB of memory and supports NVIDIA's recently announced RTX real-time ray tracing technology.
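Where the headline numbers come from, as a back-of-the-envelope sketch in Python. The per-GPU tensor throughput figure (roughly 125 TFLOPS for the V100) is NVIDIA's published spec, not a number quoted in the summary itself, so treat it as an assumption:

```python
# Illustrative check of the "2 petaflop" and "four times the memory"
# claims, using NVIDIA's public per-GPU V100 figures (assumed here).
V100_TENSOR_TFLOPS = 125   # peak mixed-precision tensor-core throughput
GPUS_DGX2 = 16
GPUS_DGX1 = 8

peak_pflops = GPUS_DGX2 * V100_TENSOR_TFLOPS / 1000
print(peak_pflops)         # 2.0 "AI petaflops"

# Memory: 16 x 32 GB (updated V100) vs. 8 x 16 GB (original DGX-1)
dgx2_mem_gb = GPUS_DGX2 * 32
dgx1_mem_gb = GPUS_DGX1 * 16
print(dgx2_mem_gb / dgx1_mem_gb)   # 4.0 -- the "four times" claim
```

Note that the 2-petaflop figure only works out if you count tensor-core throughput, a point the comments below pick up on.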
This discussion has been archived. No new comments can be posted.


  • by dryriver ( 1010635 ) on Tuesday March 27, 2018 @08:17PM (#56337983)
    The Nvidia V100 is a GPU capable of roughly 15 TeraFlops at 32-bit precision, and half that at 64-bit precision. You'd need a whopping 134 of these GPUs in a box, with perfect parallelization between them, to hit 2 PetaFlops for general GPGPU compute tasks. Nvidia claims that the TENSOR cores in a V100 deliver about 120 TeraFlops of MACHINE LEARNING performance. How they measured this is an open question - did they take a machine learning task that ran 120 times faster than a 1 TeraFlop CPU with no AI optimization could manage, and magically arrive at 120 TFLOPS? Which AI tasks those TENSOR-core TeraFlops can actually be used for is the next question. So for anyone thinking "I can get 2000 GPGPU TeraFlops in one box" - sorry, that isn't the case here. For specific AI tasks, this may be the machine to get. For general GPGPU, this thing is just a casing with a bunch of 15-TFLOP GPUs crammed together.
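    The parent's arithmetic checks out if you take the V100's roughly 15 TFLOPS FP32 peak (NVIDIA's spec, assumed here) at face value:

    ```python
    import math

    # How many V100s would 2 petaflops of general FP32 compute need?
    TARGET_TFLOPS = 2000        # 2 petaflops
    V100_FP32_TFLOPS = 15       # published FP32 peak per GPU

    gpus_needed = math.ceil(TARGET_TFLOPS / V100_FP32_TFLOPS)
    print(gpus_needed)          # 134 -- far more than the 16 in a DGX-2
    ```

    So the 2-petaflop figure is only reachable by counting tensor-core throughput, not general FP32.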
    • A flop is a flop, no matter how many petas.

    • by Anonymous Coward

      The Nvidia V100 is a 15 TeraFlops capable GPU at 32 Bit accuracy [...] For general GPGPU, this thing is just a casing with a couple of 15 TFLOP GPUs crammed together.

      It is literally called an __AI__ supercomputer; the target market and intended purpose are deep learning training and inference - tasks that make use of the Tensor Cores, which are matrix multiply-and-accumulate units.
      Sure, the flop count only applies to workloads using the Tensor Cores, but seeing as that's the market for it anyway, I see no problem.
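      For reference, the operation a Tensor Core performs is a fused matrix multiply-accumulate, D = A x B + C, on small tiles (4x4 in Volta), with half-precision inputs and single-precision accumulation. A rough NumPy sketch of the math (not the actual hardware path):

      ```python
      import numpy as np

      # Emulate one tensor-core tile op: D = A @ B + C on a 4x4 tile.
      # Inputs are FP16; accumulation happens in FP32.
      A = np.random.rand(4, 4).astype(np.float16)
      B = np.random.rand(4, 4).astype(np.float16)
      C = np.random.rand(4, 4).astype(np.float32)

      D = A.astype(np.float32) @ B.astype(np.float32) + C
      print(D.shape)   # (4, 4)
      ```

      Deep-learning workloads are dominated by exactly this kind of matrix math, which is why the "AI flops" number is so much higher than the FP32 figure.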

  • 2 Petaflop? (Score:5, Funny)

    by cold fjord ( 826450 ) on Tuesday March 27, 2018 @08:21PM (#56337999)

    I thought that nobody needed more than 640 teraflops?

  • Ray trace at 8K?
  • is that the new enhanced version with the Robo-Cop routines to whack old homeless ladies pushing their cart or bicycle across the street....
  • Imagine a Beowulf cluster [slashdot.org] of these!

    Yeah, I'm showing my age. So what?

  • Can it play DOOM?

