NVIDIA Announces Tesla K40 GPU Accelerator and IBM Partnership In Supercomputing

MojoKid writes "The supercomputing conference SC13 kicks off this week, and Nvidia is opening its own event with the launch of a new GPU and a strategic partnership with IBM. Just as the GTX 780 Ti is the full consumer implementation of the GK110 GPU, the new Tesla K40 is the supercomputing/HPC variant of the same core architecture. The K40 picks up additional clock headroom and implements the same variable clock-speed behavior that has characterized Nvidia's consumer cards for the past year, for a significant overall boost in performance. The other major shift from Nvidia's previous-gen K20X is the amount of on-board RAM: the K40 packs a full 12GB and clocks it modestly higher to boot. That's important because datasets are typically limited to on-board GPU memory (at least, if you want to work with any kind of speed). Finally, IBM and Nvidia announced a partnership to combine Tesla GPUs and Power CPUs for OpenPOWER solutions. The goal is to position the new Tesla cards as workload accelerators for specific datacenter tasks. According to Nvidia's release, Tesla GPUs will ship alongside Power8 CPUs, which are currently scheduled for a mid-2014 release. IBM's venerable architecture is expected to target a 4GHz clock speed and offer up to 12 cores with 96MB of shared L3 cache; with eight-way SMT, a 12-core implementation would be capable of handling up to 96 simultaneous threads. The two should make for a potent combination."
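
The point about datasets being bound by on-board GPU memory is easy to check programmatically. The sketch below is an editorial illustration (not from Nvidia's materials): it uses the CUDA runtime API to query a device's total and free memory before deciding whether a working set fits on-card; the dataset_bytes value is a hypothetical example.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Query the active GPU; on a Tesla K40, totalGlobalMem reports roughly 12 GB.
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);

        printf("%s: %.1f GB total, %.1f GB free\n",
               prop.name, total_bytes / 1e9, free_bytes / 1e9);

        // Hypothetical 8 GB working set: does it fit entirely in GPU memory?
        const size_t dataset_bytes = 8ULL << 30;
        if (dataset_bytes <= free_bytes)
            printf("Dataset fits on-device; no chunked host<->device streaming needed.\n");
        else
            printf("Dataset exceeds free GPU memory and must be staged in pieces over PCIe.\n");
        return 0;
    }

If the data doesn't fit, every pass over it has to be streamed across PCIe, which is why the jump from 6GB on the K20X to 12GB on the K40 matters for large problems.
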
Comments Filter:
  • by fuzzyfuzzyfungus ( 1223518 ) on Monday November 18, 2013 @02:02PM (#45456021) Journal
    "Mantle", at least according to the press puffery, is aimed at being an alternative to OpenGL/Direct3d, akin to 3DFX's old "Glide"; but for AMD gear.

    CUDA vs. OpenCL seems to be an example of the ongoing battle between an entrenched, well-supported, but costly proprietary implementation and a somewhat patchy alternative that isn't as mature but has basically everybody except Nvidia rooting for it.

    "Mantle", like 'Glide' before it, seems to be the eternal story of the cyclical move between high-performance/low-complexity(but low compatibility) minimally abstracted approaches, and highly complex, highly abstracted; but highly portable/compatible approaches. At present, since AMD is doing the GPU silicon for both consoles and a nontrivial percentage of PCs, it makes a fair amount of sense for them to offer a 'Hey, close to the metal!' solution that takes some of the heat off their drivers, makes performance on their hardware better, and so forth. If, five years from now, people are swearing at 'Mantle Wrappers' and trying to find the one magic incantation that actually causes them to emit non-broken OpenGL, though, history will say 'I told you so'.
  • by FreonTrip ( 694097 ) <`freontrip' `at' `gmail.com'> on Monday November 18, 2013 @02:41PM (#45456379)
    I wouldn't say that's strictly true - Mavericks implements OpenCL 1.2 support pervasively, even down to the rinky-dink Intel GPUs that can handle it.
  • Re:DRAM bandwidth (Score:3, Informative)

    by Anonymous Coward on Monday November 18, 2013 @03:21PM (#45456725)

    NVIDIA seems to be behind AMD in moving to 512-bit-wide GDDR5: the K40 still uses a 384-bit bus.

    Right now, memory bus width is a die-size tradeoff. NVIDIA can get GK110's memory controller up to 7 Gbps (GTX 780 Ti), which on a 384-bit bus makes for 336 GB/sec, but relatively speaking it's a big honking memory controller. AMD's 512-bit memory controller in Hawaii isn't designed to clock nearly as high, topping out at 5 Gbps, or 320 GB/sec. But it's designed to be particularly small, smaller even than AMD's old 384-bit memory controller on Tahiti.

    So despite NVIDIA's narrower bus, they actually have more available memory bandwidth than AMD does. It's not a huge difference, but it's a good reminder that there are multiple ways to pursue additional memory bandwidth (the arithmetic is sketched just below).
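
For anyone checking the numbers, peak GDDR5 bandwidth is just bus width (in bits) times the per-pin data rate, divided by 8 bits per byte. A minimal sketch of that calculation, using the figures quoted in the comment above (an editorial addition, plain C++):

    #include <cstdio>

    // Peak memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
    static double peak_bandwidth_gbs(int bus_width_bits, double gbps_per_pin) {
        return bus_width_bits * gbps_per_pin / 8.0;
    }

    int main() {
        printf("GK110 @ 7 Gbps on a 384-bit bus:  %.0f GB/s\n", peak_bandwidth_gbs(384, 7.0));  // 336
        printf("Hawaii @ 5 Gbps on a 512-bit bus: %.0f GB/s\n", peak_bandwidth_gbs(512, 5.0));  // 320
        return 0;
    }

A wider bus at a lower clock and a narrower bus at a higher clock can land within a few percent of each other; the difference shows up instead in die area and power.
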
