
Intel, NVIDIA Take Shots At CPU vs. GPU Performance 129

Posted by kdawson
from the army-boots dept.
MojoKid writes "In the past, NVIDIA has made many claims of how porting various types of applications to run on GPUs instead of CPUs can tremendously improve performance — by anywhere from 10x to 500x. Intel has remained relatively quiet on the issue until recently. The two companies fired shots this week in a pre-Independence Day fireworks show. The recent announcement that Intel's Larrabee core has been re-purposed as an HPC/scientific computing solution may be partially responsible for Intel ramping up an offensive against NVIDIA's claims regarding GPU computing."
  • It depends? (Score:5, Insightful)

    by aliquis (678370) <dospam@gmail.com> on Sunday June 27, 2010 @08:23AM (#32708284) Homepage

    Isn't it like saying "Ferrari makes the fastest tractors!" (yeah, I know!), which may be true, as long as they can actually carry out the things you want to do.

    I don't know about the limits of OpenCL/GPU-code (or architecture compared to regular CPUs/AMD64 functions, registers, cache, pipelines, what not), but I'm sure there's plenty and that someone will tell us.

  • by Posting=!Working (197779) on Sunday June 27, 2010 @09:13AM (#32708482)

    What the hell kind of sales pitch is "We're only a little more than twice as slow!"

    [W]e perform a rigorous performance analysis and find that after applying optimizations appropriate for both CPUs and GPUs the performance gap between an Nvidia GTX280 processor and the Intel Core i7 960 processor narrows to only 2.5x on average.

    It's gonna work, too.

    Humanity sucks at math.

  • by Junta (36770) on Sunday June 27, 2010 @09:25AM (#32708520)

    On top of being highly capable at massively parallel floating-point math (the bread and butter of the Top500 and almost all real-world HPC applications), GPU chips benefit from economies of scale, since they have a much larger market to sell into. If Intel ships an HPC-only processor, I don't see it really surviving. Numerous HPC-only accelerators that provided huge boosts over CPUs have flopped; GPUs growing into that capability is the first large-scale phenomenon in HPC with legs.
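    The kind of massively parallel floating-point work described above is typified by kernels like SAXPY, where every output element can be computed independently, so a GPU can assign one thread per element. A minimal CPU-side sketch in Python/NumPy (function name and array sizes here are purely illustrative):

```python
import numpy as np

def saxpy(a, x, y):
    # Computes y_i = a * x_i + y_i for every i. Each element is
    # independent of the others, which is exactly the shape of
    # problem a GPU parallelizes across thousands of threads;
    # NumPy runs the same elementwise loop vectorized on the CPU.
    return a * x + y

x = np.arange(4, dtype=np.float32)  # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)    # [1, 1, 1, 1]
print(saxpy(2.0, x, y))             # [1. 3. 5. 7.]
```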

  • by Overzeetop (214511) on Sunday June 27, 2010 @09:39AM (#32708574) Journal

    Two things: you've been conditioned to accept gaming graphics of yesteryear, and your need for more complex game play now trumps pure visuals. You can drop in a $100 video card, set the quality to give you excellent frame rates, and it looks fucking awesome because you remember playing Doom. Also, once you get to a certain point, the eye candy takes a backseat to game play and story - the basic cards hit that point pretty easily now.

    Back when we used to game, you needed just about every cycle you could get to make basic gameplay what would now be considered "primitive". Middling level detail is great, in my opinion. Going up levels to the maximum detail really adds very little. I won't argue that it's cool to see that last bit of realism, but it's not worth doubling the cost of a computer to get it.

  • Re:It depends? (Score:5, Insightful)

    by rahvin112 (446269) on Sunday June 27, 2010 @09:43AM (#32708598)

    It is not a secret (it's stated on both Intel's and AMD's roadmaps) that GPU-like programmable FP will be integrated into the FP units of the general-purpose processor. The likely result will be the same general-purpose CPU you love, but with dozens of additional FP units that excel at the kind of math the parent described, except more flexible.

    When the Fusion-esque products ramp and GPGPU functionality is integrated into the CPU, Nvidia is out of business. Oh, I don't expect these Fusion products to have great GPUs, but once you destroy the low-end and mid-range graphics marketplace there is very little money left to fund R&D. (3dfx was the first into the high-end 3D market and barely broke even on its first sales; the only reason it survived was heavy arcade-sector sales.) If Nvidia hasn't been allowed to purchase Via's x86 license by that point, they are, quite frankly, out of business. Not immediately, of course; they will spend a few years evaporating their assets while trying to compete in the high-end marketplace alone, but in the end they won't survive. Things go in cycles, and the independent graphics chip cycle is going to end very shortly. Maybe in a decade it will come back, but I'm skeptical. CPUs have exceeded the speed needed for 80% of the tasks out there.

    When I first started my career, computer runs of my design work took 5-30 minutes at bare-minimum quality. These days I can exceed that bare minimum twentyfold and the run takes seconds. It's to the point where I can model with far more precision than the end product needs at almost no time penalty. Additional CPU speed at this point is almost meaningless, and my business isn't alone in this. In fact, most of the software in my field is single-threaded (and the apps run that fast on a single thread). Once the software is multi-threaded there will be no real need for additional CPU power, and it may come to the point where businesses like mine stop upgrading hardware beyond what's needed to replace failures. I just don't see a future for independent graphics chip/card producers.

  • Re:AMD (Score:5, Insightful)

    by Junta (36770) on Sunday June 27, 2010 @10:04AM (#32708678)

    AMD is the most advantaged on this front...

    Intel and nVidia are stuck in the mode of realistically needing one another and simultaneously downplaying the other's contribution.

    AMD can use what's best for the task at hand/accurately portray the relative importance of their CPUs/GPUs without undermining their marketing message.

  • by werewolf1031 (869837) on Sunday June 27, 2010 @10:28AM (#32708792)
    Just kiss and make up already. Intel and nVidia have but one choice: to join forces and compete collectively against AMD/ATI. Anything less, and they're cutting off their noses to spite their faces.
  • by jedidiah (1196) on Sunday June 27, 2010 @10:44AM (#32708868) Homepage

    Yeah, specialty silicon for a small subset of problems will stomp all over a general-purpose CPU. No big news there.

    Why is Intel even bothering to whine about this stuff? They sound like a bunch of babies trying to argue that the sky isn't blue.

    This makes Intel look truly sad. It's completely unnecessary.

  • by chriso11 (254041) on Sunday June 27, 2010 @12:20PM (#32709300) Journal

    The reason Intel is whining is the market for large number-crunching systems and high-end workstations. Rather than Intel selling thousands of chips for the former, Nvidia (and to a lesser extent AMD) gets to sell hundreds of GPU chips. And for the workstations, Intel sells only one chip instead of two to four.

  • Re:AMD (Score:3, Insightful)

    by Joce640k (829181) on Sunday June 27, 2010 @02:14PM (#32710056) Homepage

    I don't think AMD really cares about competing with top-end Intel processors. It takes a lot of R&D investment for very little return (it's a tiny market segment).

    In the low/mid range AMD rules the roost in terms of value for money.

