Intel, NVIDIA Take Shots At CPU vs. GPU Performance

MojoKid writes "In the past, NVIDIA has made many claims of how porting various types of applications to run on GPUs instead of CPUs can tremendously improve performance — by anywhere from 10x to 500x. Intel has remained relatively quiet on the issue until recently. The two companies fired shots this week in a pre-Independence Day fireworks show. The recent announcement that Intel's Larrabee core has been re-purposed as an HPC/scientific computing solution may be partially responsible for Intel ramping up an offensive against NVIDIA's claims regarding GPU computing."
  • You lazy fuckers (Score:5, Interesting)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday June 27, 2010 @08:46AM (#32708372) Homepage Journal

    I don't expect slashdot "editors" to actually edit, but could you at least link to the most applicable past story on the subject [slashdot.org]? It's almost like you people don't care if slashdot appears at all competent. Snicker.

  • by leptogenesis ( 1305483 ) on Sunday June 27, 2010 @09:04AM (#32708440)
    At least as far as parallel computing goes. CPUs have been designed for decades to handle sequential problems, where each new computation is likely to depend on the results of recent computations. GPUs, on the other hand, are designed for situations where most of the operations happen on huge vectors of data; the reason they work well isn't really that they have many cores, but that the work of splitting up the data and distributing it to the cores is (supposedly) done in hardware. On a CPU, the programmer has to deal with splitting up the data, and letting the programmer control that process makes many hardware optimizations impossible.

    The surprising thing in TFA is that Intel is claiming to have done almost as well on a problem that NVIDIA used to tout their GPUs. It really makes me wonder what problem it was. The claim that "performance on both CPUs and GPUs is limited by memory bandwidth" seems particularly suspect, since on a good GPU the memory access should be parallelized.

    It's clear that Intel wants a piece of the growing CUDA userbase, but I think it will be a while before any x86 processor can compete with a GPU on the problems that a GPU's architecture was specifically designed to address.
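
    The data-parallel model described above is easiest to see in code. Below is a minimal sketch, not anything from TFA's benchmarks: the same vector update written as a sequential CPU loop and as a hypothetical CUDA kernel in which each thread handles exactly one element. The kernel name, array sizes, and launch configuration are illustrative assumptions.

        #include <cuda_runtime.h>
        #include <cstdio>
        #include <cstdlib>

        // Sequential CPU version: one core walks the whole array in order.
        void saxpy_cpu(int n, float a, const float *x, float *y) {
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }

        // Data-parallel GPU version: each thread computes one element, and the
        // hardware schedules the threads across the cores.
        __global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            // Host-side setup.
            float *hx = (float *)malloc(bytes);
            float *hy = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

            // Copy the data to the GPU's on-board memory, run the kernel,
            // then copy the result back.
            float *dx, *dy;
            cudaMalloc(&dx, bytes);
            cudaMalloc(&dy, bytes);
            cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

            saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
            cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

            printf("y[0] = %f\n", hy[0]);  // expect 2*1 + 2 = 4.0

            cudaFree(dx); cudaFree(dy);
            free(hx); free(hy);
            return 0;
        }

    The point is only the shape of the code: the kernel contains no loop over the data, so how elements are split across cores is left to the hardware and the launch configuration rather than to the programmer.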
  • Re:It depends? (Score:1, Interesting)

    by Anonymous Coward on Sunday June 27, 2010 @09:45AM (#32708608)

    That is an excellent post, with the exception of this little bit:

    GPUs have extremely fast RAM connected to them, much faster than even system RAM

    I'd like to see a citation for that little bit of trivia... the specific type & speed of RAM on a board with a GPU varies based on model and manufacturer. Cheaper boards use slower RAM; the more expensive ones use higher-end stuff. I haven't seen ANY GPUs that came with on-board RAM that is any different from what you can mount as normal system RAM, however.

    Not trolling, I just wanted to point out a serious flaw in what is otherwise a great post.

  • Re:It depends? (Score:3, Interesting)

    by somenickname ( 1270442 ) on Sunday June 27, 2010 @10:12AM (#32708716)

    That's a very good breakdown of what you need to benefit from GPU-based computing, but really, only #1 has any relevance vs. an x86 chip.

    #2) Yes, an x86 chip will have a high clock speed, but unless you can use SSE instructions, x86 is crazy slow. Also, most (if not all) architectures will give you half the flops for using the double-precision vector instructions vs. the single-precision ones.

    #3) This is a problem with CPUs as well, except that, as you point out, the memory is much slower. Performance is often about hiding latency. You don't need your problem to fit in the L2/L3 cache of a CPU, but if the compiler/programmer/CPU can prefetch things into L2/L3 before they're accessed, it's a huge win. The same goes for having things in GPU memory before they're needed. The difference is that the GPU has a TON of memory compared to an L2/L3 cache.

    #4) You might be right here. I know that with hyperthreading, a CPU will yield to another "thread" when it mispredicts a branch. However, the fact that branch misprediction is a condition in which the CPU will switch to another thread tells me that mispredicting a branch on an x86 CPU is also a fairly expensive thing to do. Maybe not as expensive as on a GPU, but expensive nonetheless.

    I suppose it all comes down to what kind of problem you're trying to compute, but if you can make your problem work in a way that's pleasing to #1, using a GPU is probably going to be a win.
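
    Point #2 above is straightforward to illustrate. A 128-bit SSE register holds four single-precision floats but only two doubles, so each vector instruction retires half as many double-precision operations; that's where the "half the flops" figure comes from. A rough host-side sketch with SSE intrinsics (the values are arbitrary and only show the packing):

        #include <xmmintrin.h>  // SSE:  packed single precision (4 x 32-bit float)
        #include <emmintrin.h>  // SSE2: packed double precision (2 x 64-bit double)
        #include <cstdio>

        int main() {
            // One 128-bit add processes FOUR single-precision values at once...
            __m128 fa = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
            __m128 fb = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
            __m128 fsum = _mm_add_ps(fa, fb);

            // ...but only TWO double-precision values, hence roughly half the
            // peak flops when the same code is switched from float to double.
            __m128d da = _mm_set_pd(2.0, 1.0);
            __m128d db = _mm_set_pd(20.0, 10.0);
            __m128d dsum = _mm_add_pd(da, db);

            float fout[4];
            double dout[2];
            _mm_storeu_ps(fout, fsum);
            _mm_storeu_pd(dout, dsum);
            printf("4 floats per op:  %.0f %.0f %.0f %.0f\n",
                   fout[0], fout[1], fout[2], fout[3]);
            printf("2 doubles per op: %.0f %.0f\n", dout[0], dout[1]);
            return 0;
        }

    Nothing here is tied to any particular benchmark; it only shows why the single/double ratio the parent mentions falls directly out of the register width.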

  • Re:It depends? (Score:3, Interesting)

    by Spatial ( 1235392 ) on Sunday June 27, 2010 @10:39AM (#32708846)

    I haven't seen ANY GPU's that came with on-board RAM that is any different than what you can mount as normal system RAM, however.

    You haven't been looking very hard. Most GPUs have GDDR3 or GDDR5 running at very high frequencies.

    My system for example:
    Main memory: DDR2 400 MHz, 64-bit bus. 6,400 MB/sec max.
    GPU memory: GDDR3 1050 MHz, 448-bit bus. 117,600 MB/sec max.

    Maybe double the DDR2 figure since it's in dual-channel mode. I'm not sure, but it hardly makes much of a difference by comparison. :)

    That isn't even exceptional by the way. I have a fairly mainstream GPU, the GTX 260 c216. High-end cards like the HD5870 and GTX 480 are capable of pushing more than 158,000 and 177,000 MB/sec respectively.
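
    For what it's worth, both figures above follow from the same back-of-the-envelope formula: peak bandwidth is the bus width in bytes times the effective transfer rate, and DDR-style memories transfer on both clock edges. A quick sketch that reproduces the two numbers in the comment (the clocks and bus widths are the commenter's, not official spec sheets):

        #include <cstdio>

        // Peak theoretical bandwidth in MB/s:
        //   (bus width in bits / 8) bytes per transfer * effective transfer rate.
        // DDR/GDDR memories move data on both clock edges, so the effective rate
        // is twice the quoted clock.
        static double peak_mb_per_s(int bus_bits, double clock_mhz,
                                    double transfers_per_clock) {
            return (bus_bits / 8.0) * clock_mhz * transfers_per_clock;
        }

        int main() {
            // DDR2 at 400 MHz on a 64-bit bus, single channel.
            printf("DDR2, 400 MHz, 64-bit:    %.0f MB/s\n",
                   peak_mb_per_s(64, 400.0, 2.0));    // ~6,400
            // GDDR3 at 1050 MHz on the GTX 260's 448-bit bus.
            printf("GDDR3, 1050 MHz, 448-bit: %.0f MB/s\n",
                   peak_mb_per_s(448, 1050.0, 2.0));  // ~117,600
            return 0;
        }

    Dual-channel system RAM doubles the first number, which is the commenter's point above; that still leaves close to an order of magnitude between the two.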
