Graphics | Supercomputing | AMD | Hardware | Technology

NVIDIA and AMD Launch New High-End Workstation, Virtualization, and HPC GPUs

MojoKid writes "Nvidia is taking the wraps off a new GPU targeted at HPC and, as expected, it's a monster. The Nvidia K20, based on the GK110 GPU, weighs in at 7.1B transistors, double the previous-gen GK104's 3.54B. The GK110 is capable of pairing double-precision operations with other instructions (Fermi and GK104 couldn't), and the number of registers each thread can access has been quadrupled, from 63 to 255. Threads within a warp are now capable of sharing data. K20 also supports a greater number of atomic operations and brings new features to the table, including Dynamic Parallelism. Meanwhile, AMD has announced a new FirePro graphics card at SC12 today, and it's aimed at server workloads and data center deployment. Rumors of a dual-GPU Radeon 7990 have floated around since before the HD 7000 series debuted, but this is the first time we've seen such a card in the wild. On paper, AMD's new FirePro S10000 is a serious beast. Single- and double-precision rates of 5.9 TFLOPS and 1.48 TFLOPS respectively are higher than anything from Intel or Nvidia, as is the card's memory bandwidth. The flip side to these figures, however, is the eye-popping power draw. At 375W, the S10000 needs a pair of eight-pin PSU connectors. The S10000 is aimed at the virtualization market, with its dual GPUs on a single card offering a good way to improve GPU virtualization density inside a single server." My entire computer uses less power than one of these cards.
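For readers wondering what "threads within a warp are now capable of sharing data" looks like in practice, here is a minimal sketch of a warp-level sum reduction built on the shuffle intrinsics Kepler introduced (written with the modern __shfl_down_sync spelling); the kernel, sizes, and values are illustrative and not taken from either vendor's announcement.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Warp-level sum reduction: the shuffle intrinsic lets each lane read a
    // register value from another lane in the same warp, so the 32 threads can
    // combine their partial results without a round trip through shared memory.
    __global__ void warpSum(const float *in, float *out)
    {
        float val = in[threadIdx.x];

        // Halve the stride each step; after log2(32) = 5 steps lane 0 holds the sum.
        for (int offset = warpSize / 2; offset > 0; offset /= 2)
            val += __shfl_down_sync(0xffffffff, val, offset);

        if (threadIdx.x == 0)
            *out = val;
    }

    int main()
    {
        float h_in[32], h_out = 0.0f, *d_in, *d_out;
        for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;      // expected sum: 32

        cudaMalloc(&d_in, 32 * sizeof(float));
        cudaMalloc(&d_out, sizeof(float));
        cudaMemcpy(d_in, h_in, 32 * sizeof(float), cudaMemcpyHostToDevice);

        warpSum<<<1, 32>>>(d_in, d_out);                  // launch a single warp

        cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("warp sum = %.1f\n", h_out);

        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }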


  • by symbolset ( 646467 ) * on Tuesday November 13, 2012 @12:24AM (#41964361) Journal
    Here is the spin from El Reg [theregister.co.uk].
  • Well, it's good to see AMD keeping with tradition, a blessing for those in the frozen north...

  • Beowulf these mofos.
    • Beowulf these mofos.

      That's so last century.

      These days it's "Imagine a Bitcoin mining rig of these."

      Doesn't quite have the same cadence, though.

  • by mozumder ( 178398 ) on Tuesday November 13, 2012 @12:29AM (#41964397)

    3.6 kilowatts, 16 GPUs:
    http://fireuser.com/blog/8_amd_firepro_s10000s_16_gpus_achieve_8_tflops_real_world_double_precision_/ [fireuser.com]

    and to think all this comes from video games.

    • Just look at those 8 cards crammed into that case! What are they called again? Oh yeah, FirePro!
    • "and to think all this comes from video games."

      The word "video games" just hides the fact that videogames are SIMULATIONS/Models (although simplified) of some aspect of the world. No one would be surprised since videogames are basically alternative world simulations and we're heading towards a time (eventually over the long term) where extremely complex behavior will be simulated.

    • by knarf ( 34928 ) on Tuesday November 13, 2012 @03:00AM (#41965005)

      Those 8 TFLOPS would have landed it somewhere at the top of the TOP500 supercomputer performance list in November, 2011 [top500.org]. ASCI White [top500.org] used 8192 375MHz Power3 cores to achieve this performance. It took up a fair bit of space [energy.gov] and used 3 MW to run the machine, with a further 3 MW needed for cooling. It had a theoretical processing speed of 12.3 teraflops.

      • by knarf ( 34928 ) on Tuesday November 13, 2012 @03:02AM (#41965017)

        Of course that should read 'November 2001', not 'November 2011'...

      • GPUs are powerful because they are more limited than CPUs. They are very good at what they do, but not as good at general operations. So, you can find things they are exceedingly good at, and thus way faster than CPUs, but also things they are bad at, and thus way slower.

        1 TFLOP on a video card isn't the same as a TFLOP on a CPU in terms of the things you can do. A simple example of the limits is memory: the GPU relies on very fast local memory to do its work, but it is small, relatively speaking, under 10
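        To make the "fast but small local memory" point concrete, here is a hedged CUDA sketch (the tile size and data are illustrative) of the standard pattern: stage a tile of data in on-chip shared memory, of which each multiprocessor has only tens of kilobytes, rearrange it there, and write it back to the much larger but much slower device DRAM.

            #include <cstdio>
            #include <cuda_runtime.h>

            #define TILE 256   // illustrative tile size, small enough for shared memory

            // Reverse each 256-element tile: one coalesced read from DRAM into fast
            // on-chip shared memory, a reordering on chip, and one write back out.
            __global__ void reverseTiles(float *data)
            {
                __shared__ float tile[TILE];

                int idx = blockIdx.x * TILE + threadIdx.x;
                tile[threadIdx.x] = data[idx];
                __syncthreads();            // wait until the whole tile is on chip

                data[idx] = tile[TILE - 1 - threadIdx.x];
            }

            int main()
            {
                const int n = 1024;         // must be a multiple of TILE
                float h[1024], *d;
                for (int i = 0; i < n; ++i) h[i] = (float)i;

                cudaMalloc(&d, n * sizeof(float));
                cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

                reverseTiles<<<n / TILE, TILE>>>(d);

                cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
                printf("first element is now %.0f\n", h[0]);   // 255: first tile reversed
                cudaFree(d);
                return 0;
            }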

  • 375 W (Score:5, Insightful)

    by timeOday ( 582209 ) on Tuesday November 13, 2012 @02:53AM (#41964995)
    Here is a dumb thing to say:

    My entire computer uses less power than one of these cards.

    Does the person who wrote this know how much a TFLOP actually is, let alone 5.9 TFLOPS (single precision) and 1.48 TFLOPS (double)? As an example, an Intel Core i7 980 XE does 109 GFLOPS double-precision. This is over 13 times that! It is really exciting to see the power of GPUs broadened to scientific computing in general. I doubt these cards would be cost-effective for gaming, or are really intended for it.
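    Taking the two numbers quoted above at face value, the "over 13 times" claim checks out:

    \[
    \frac{1.48\ \text{TFLOPS}}{109\ \text{GFLOPS}} = \frac{1480}{109} \approx 13.6
    \]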

    • There is probably someone making a mod for some first-person shooter to run it across four QuadHD screens powered by an array of these cards. A couple of posts up, someone already posted images of a system with 8 of these cards in it (because 1 wouldn't be awesome enough).
      • Ironically, I don't think you can. At least, not at all easily.

        The FirePro cards shown don't support multiple GPUs driving a single output. So you'd have to find some way to avoid treating the array of screens as a single screen, which is far easier said than done. At the very least, extensive reworking of the game's renderer seems in order, and any game open-source enough for a modder to do that is old or graphically limited enough that you could drive quad QHD monitors using my laptop.

        The Tesla and Xeon P

    • Does the person who wrote this know how much a TFLOP actually is, let alone 5.9 TFLOPS (single precision) and 1.48 TFLOPS (double)? As an example, an Intel Core i7 980 XE does 109 GFLOPS double-precision. This is over 13 times that!

      By way of comparison, the Opteron 6174 can hit a bit over 180 GFlops in LINPACK. On a shared-memory multi-socket machine, the efficiency is very high, so a quad-socket Opteron 6100 box would probably manage in the region of 700 GFlops.

      Anyway, plugging the numbers, the 6174
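      For anyone plugging the numbers themselves, the usual back-of-the-envelope for these peak figures is execution units times clock times FLOPs per cycle. Assuming the S10000's published configuration of two GPUs with 1,792 stream processors each at 825 MHz, and two single-precision FLOPs per stream processor per clock, the quoted 5.9 TFLOPS falls out directly:

      \[
      \text{peak} = N_{\text{units}} \times f_{\text{clock}} \times \text{FLOPs/cycle}
      \qquad
      2 \times 1792 \times 0.825\,\text{GHz} \times 2 \approx 5.9\,\text{TFLOPS}
      \]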

    • Does the person even know that computer components are not sitting there consuming their TDP all the time?

      The TDP is just a design figure: the maximum amount of heat the cooling system is expected to dissipate. It is NOT the amount of power the device will consume in normal operation. In practice, you will almost NEVER see a CPU or GPU consuming its TDP.

      • I don't know, hey. Given that these things dynamically boost clocks whenever there is thermal/power headroom available, I would guess that yes, you would see it using its TDP. Most processors under the right load should hit TDP or close to it; it might be TDP minus 2% or 3%, but close enough. If you have better cooling or a lower ambient temperature, you might run into the power limit instead, which is generally set close to the rated TDP. But it all depends on what your load is.

    • This. Remember the Cray-1 supercomputer? $5-10 million, 100 kW power draw. The standard against which other systems were compared through the early '80s.

      This card provides more than ten thousand times the computational power while using less than one-half of one percent of the electrical power.

      The future really is a cool place.
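      Rough arithmetic behind that claim, assuming the often-quoted Cray-1 peak of about 160 MFLOPS and its roughly 100 kW draw:

      \[
      \frac{5.9\,\text{TFLOPS}}{160\,\text{MFLOPS}} \approx 37{,}000\times \text{ (single precision)},
      \qquad
      \frac{1.48\,\text{TFLOPS}}{160\,\text{MFLOPS}} \approx 9{,}000\times \text{ (double precision)},
      \qquad
      \frac{375\,\text{W}}{100\,\text{kW}} \approx 0.4\%
      \]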

  • With the adoption of GPU-based rendering in 3D graphics workstations for the entertainment industry, it's great to see developments like this.

  • by q043x ( 256014 )
    I just realised, with this article, that my understanding of computer science may not be keeping up with reality. I am now old. Why would someone want to calculate zillions of floating-point results on their PC? How can a GPU card have no graphical interface? And when did we start using teraflops in desktops?!
    • Your GPU card does more than just output video. The GPUs on these cards are designed for brute force calculations and chugging through numbers. They're designed for physics engines, running through protein folding calculations, and rendering high quality video in real time.

      If you want to equate them to a design from the past, think of them as really, really, really powerful math co-processors. CPUs are designed for short command queues and calculations where the next step is hard to predict, while GPUs are d
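      As a minimal illustration of the kind of brute-force, embarrassingly parallel arithmetic that maps well onto these cards, here is a hypothetical SAXPY kernel (y = a*x + y); the sizes and values are made up for the example.

          #include <cstdio>
          #include <cstdlib>
          #include <cuda_runtime.h>

          // SAXPY: a million independent, identical multiply-adds, exactly the
          // shape of work a GPU's wide parallel hardware is built for.
          __global__ void saxpy(int n, float a, const float *x, float *y)
          {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n)
                  y[i] = a * x[i] + y[i];
          }

          int main()
          {
              const int n = 1 << 20;                 // one million elements
              size_t bytes = n * sizeof(float);

              float *h_x = (float *)malloc(bytes);
              float *h_y = (float *)malloc(bytes);
              for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

              float *d_x, *d_y;
              cudaMalloc(&d_x, bytes);
              cudaMalloc(&d_y, bytes);
              cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
              cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

              saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);

              cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
              printf("y[0] = %.1f (expected 5.0)\n", h_y[0]);

              cudaFree(d_x); cudaFree(d_y);
              free(h_x); free(h_y);
              return 0;
          }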

  • by hjf ( 703092 ) on Tuesday November 13, 2012 @07:55AM (#41966193) Homepage

    These things are regulated. Argentina tried to buy a few (5!) of their previous line (I think it was Tesla), and the US government wouldn't allow it. Guess you have to be a NATO member to buy these.
