Graphics Hardware

NVIDIA Predicts 570x GPU Performance Boost

Gianna Borgnine writes "NVIDIA is predicting that GPU performance is going to increase a whopping 570-fold in the next six years. According to TG Daily, NVIDIA CEO Jen-Hsun Huang made the prediction at this year's Hot Chips symposium. Huang claimed that while the performance of GPU silicon is heading for a monumental increase in the next six years — making it 570 times faster than the products available today — CPU technology will find itself lagging behind, increasing to a mere 3 times current performance levels. 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • In other news... (Score:5, Informative)

    by Hadlock ( 143607 ) on Friday August 28, 2009 @05:22PM (#29236317) Homepage Journal

    In other news, ATI is selling its 4870 series cards for $130 on Newegg, and they are twice as fast as an Nvidia 9800GTS at the same price (at least in Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

    http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

    Yeah, I'd be making outrageous statements too if I were Nvidia.

  • Re:In other news... (Score:2, Informative)

    by Hadlock ( 143607 ) on Friday August 28, 2009 @05:27PM (#29236373) Homepage Journal

    Here's the L4D comparo, sorry for the wrong link:

    http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/Left4Dead,1455.html [tomshardware.com]

    The 9800GT and 8800GT are both in the 40-60 fps range while the 4870 (single GPU) is around 106 fps. It's a pretty staggering difference.

  • by Anonymous Coward on Friday August 28, 2009 @05:39PM (#29236547)

    The prediction is complete nonsense. It assumes that CPUs only get 20% faster per year (compounded), which would only be true if they did not add more cores. And finally, GPUs are hitting the same thermal/power-leakage wall that CPUs hit several years ago - at best they will get faster in lock step with CPUs.

    A GPU is not a general-purpose processor the way a CPU is. It is only good at performing a large number of repetitive single-precision (32-bit) floating-point calculations without branching. Double-precision (64-bit) calculations - double in C speak - are 4 times slower than single precision on a GPU. And the second you have an "if" in GPU code, everything grinds to a halt: conditionals effectively break the GPU's SIMD (single instruction, multiple data) model and stall the pipeline.
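
    A quick sketch of both points above, in CUDA C (a sanity check, not a benchmark; the 20%/year figure is the poster's assumption and the 570x figure is NVIDIA's claim):

        // sanity_check.cu -- compile with: nvcc sanity_check.cu
        #include <cstdio>
        #include <cmath>

        // Shown for illustration (not launched below): a data-dependent branch makes
        // threads of the same warp take different paths, and the hardware then runs
        // the two paths one after the other. That serializes the warp -- costly,
        // though it does not literally halt the whole GPU.
        __global__ void divergent(const float *in, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            if (in[i] > 0.0f)
                out[i] = sqrtf(in[i]);
            else
                out[i] = 0.0f;
        }

        int main() {
            // 20% per year compounded over six years is where the "3x" CPU figure comes from.
            printf("CPU: 1.20^6    = %.2fx\n", pow(1.20, 6.0));               // ~2.99x
            // The annual growth rate that a 570x improvement in six years would require.
            printf("GPU: 570^(1/6) = %.2fx per year\n", pow(570.0, 1.0/6.0)); // ~2.88x/yr
            return 0;
        }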

  • Re:haha yeah right (Score:3, Informative)

    by Martin Blank ( 154261 ) on Friday August 28, 2009 @05:41PM (#29236569) Homepage Journal

    The IEEE figures that semiconductor tech will be at the 11nm level around 2022. Intel and Nvidia both claim that they'll be significantly further along the path than the IEEE's roadmap. Maybe they're right, and I hope they are, but there are some very significant problems that appear as the process shrinks to that level.

  • Re:In other news... (Score:3, Informative)

    by Spatial ( 1235392 ) on Friday August 28, 2009 @05:53PM (#29236733)
    Troll mod? No, this is mostly true.

    While his example is wrong (Nvidia's competitor to the HD4870 is the GTX 260 c216), AMD do have better value for money on their side. The HD4870 is evenly matched but a good bit cheaper.

    The situation is similar in the CPU domain. The Phenom IIs are slightly slower per-clock than the Core 2s they compete with, but are considerably cheaper.
  • by Anonymous Coward on Friday August 28, 2009 @05:56PM (#29236773)

    Even generously assuming they'd achieve an 8x density gain from die shrinks, they'd need to be producing chips with roughly a 41,000 mm^2 die area to get 570x the throughput. (Their current chips are already the biggest around at 576 mm^2.)
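
    The arithmetic behind that figure, as a back-of-the-envelope sketch (the 8x gain is the poster's "generous" assumption; 570x and 576 mm^2 are from the thread):

        // die_area.cu -- host-only back-of-the-envelope check
        #include <cstdio>

        int main() {
            const double current_die_mm2 = 576.0;  // biggest current GPU die, per the post
            const double perf_target     = 570.0;  // NVIDIA's claimed factor
            const double density_gain    = 8.0;    // the "generous" shrink assumed above
            // If per-transistor performance stays roughly flat, 570x the throughput needs
            // roughly 570x the transistors, and an 8x density gain covers only part of that.
            printf("required die area: %.0f mm^2\n",
                   current_die_mm2 * perf_target / density_gain);  // ~41,000 mm^2
            return 0;
        }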

  • by volsung ( 378 ) <stan@mtrr.org> on Friday August 28, 2009 @06:21PM (#29237081)

    The GeForce 9 series was a rebrand/die shrink of GeForce 8, but the GTX 200 series has some major improvements under the hood:

    * Vastly smarter memory controller including better batching of reads, and the ability to map host memory into the GPU memory space
    * Double the number of registers
    * Hardware double precision support (not as fast as single, but way faster than emulating it)

    These sorts of things probably don't matter to people playing games, but they are huge wins for people doing GPU computing. The GTX 200 series has also seen a minor die shrink during the generation, so I don't know if the next generation will be more of a die shrink or actually include improved performance. (Hopefully the latter to keep up with Larrabee.)
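
    A minimal sketch of the host-memory-mapping feature mentioned above, using the CUDA runtime's zero-copy calls (cudaHostAlloc with cudaHostAllocMapped); whether it is available and fast depends on the device, so treat this as an illustration rather than a recipe:

        // zero_copy.cu -- map pinned host memory into the GPU address space
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void scale(float *data, int n, float factor) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= factor;   // reads and writes cross the bus to host RAM
        }

        int main() {
            const int n = 1 << 20;
            cudaSetDeviceFlags(cudaDeviceMapHost);      // must come before the context is created

            float *host_ptr = NULL, *dev_alias = NULL;
            cudaHostAlloc((void **)&host_ptr, n * sizeof(float), cudaHostAllocMapped);
            for (int i = 0; i < n; ++i) host_ptr[i] = 1.0f;

            // Device-side pointer aliasing the same pinned host buffer -- no cudaMemcpy.
            cudaHostGetDevicePointer((void **)&dev_alias, host_ptr, 0);

            scale<<<(n + 255) / 256, 256>>>(dev_alias, n, 2.0f);
            cudaDeviceSynchronize();                    // finish before the host reads the data

            printf("host_ptr[0] = %f\n", host_ptr[0]);  // 2.0
            cudaFreeHost(host_ptr);
            return 0;
        }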

  • Re:In other news... (Score:4, Informative)

    by MrBandersnatch ( 544818 ) on Friday August 28, 2009 @06:24PM (#29237113)

    Depending on the vendor it is now possible to get a 275 for less than a 4890, and a 260 for only slightly more than a 4870; at lower price points it's very competitive too. My point is that NV and ATI are on pretty level ground again, and the ONLY reason I now choose NV over ATI is the superior NV drivers (on both the Linux and Windows side)... oh, and the fact that ATI pulled a fast one on me with their AVIVO performance claims. Shame on you, ATI!

  • by Anonymous Coward on Friday August 28, 2009 @06:27PM (#29237151)

    Simple. GPUs are already massively parallel in a way that is actually usable. While the gigahertz race has pretty much stopped, the transistor race is still on, but getting the most out of a multi-core CPU for a single application is non-trivial. GPUs, however, are a completely different ballgame, where the performance of the card pretty much scales with the number of shader cores.
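
    A sketch of why that scaling works: in a data-parallel kernel each thread handles one element with no dependence on its neighbours, so more shader cores simply means more elements in flight, with no change to the code (assuming, of course, that the problem supplies enough independent work):

        // saxpy.cu -- one independent element per thread
        #include <cuda_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];   // no inter-thread communication at all
        }

        int main() {
            const int n = 1 << 20;
            float *x = NULL, *y = NULL;
            cudaMalloc((void **)&x, n * sizeof(float));
            cudaMalloc((void **)&y, n * sizeof(float));
            cudaMemset(x, 0, n * sizeof(float));
            cudaMemset(y, 0, n * sizeof(float));

            // The same launch runs on 16 shader cores or 240; the hardware decides
            // how many blocks execute concurrently, which is the contrast with
            // hand-threading a CPU application.
            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
            cudaDeviceSynchronize();

            cudaFree(x);
            cudaFree(y);
            return 0;
        }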

  • by TeXMaster ( 593524 ) on Friday August 28, 2009 @06:35PM (#29237225)

    You mean like the Tesla [nvidia.com]?

    No, that won't do. The NVIDIA architecture (which is shared between Tesla and the graphics cards) is 32-bit, meaning it can only flat-address 4GB of RAM, tops. The more sophisticated Tesla solutions are essentially built from clusters of Tesla cards, each with its own 4GB of RAM at most. Separate memory spaces mean expensive memory transfers to share data between the cards, which is not an issue if you can get a good domain decomposition, but is a BIG issue if you cannot.

    The revolution for HPC on GPUs would be a 64-bit GPU architecture.

    Proper support for doubles and possibly even long doubles would be a plus, for applications that need it.
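
    A sketch of what "separate memory spaces" means in practice (this assumes two CUDA devices are installed; sizes are arbitrary): data moving from one card to the other is staged through host memory with explicit copies, two trips over the bus.

        // two_gpus.cu -- no shared address space, so data bounces through the host
        #include <cstdlib>
        #include <cuda_runtime.h>

        int main() {
            const size_t bytes = 256 << 20;            // 256 MB, arbitrary
            float *d0 = NULL, *d1 = NULL;
            float *staging = (float *)malloc(bytes);   // host bounce buffer

            cudaSetDevice(0);
            cudaMalloc((void **)&d0, bytes);           // lives in card 0's memory
            cudaSetDevice(1);
            cudaMalloc((void **)&d1, bytes);           // lives in card 1's memory

            // Card 0 -> host -> card 1. This is why a poor domain decomposition,
            // with lots of data shared between cards, hurts so much.
            cudaSetDevice(0);
            cudaMemcpy(staging, d0, bytes, cudaMemcpyDeviceToHost);
            cudaSetDevice(1);
            cudaMemcpy(d1, staging, bytes, cudaMemcpyHostToDevice);

            cudaFree(d1);
            cudaSetDevice(0);
            cudaFree(d0);
            free(staging);
            return 0;
        }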

  • by tyrione ( 134248 ) on Friday August 28, 2009 @07:45PM (#29237915) Homepage

    >> Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:
    >> 1. The GPU has to become 570-fold more efficient
    >> 2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card
    >> Both seem highly unlikely.

    It's not a linear relationship.
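
    If the point of that reply is that overall throughput is a product of several factors (density, core count, clocks, per-core width) rather than one knob that must move 570-fold on its own, a toy illustration looks like this (every factor below is made up purely to show the multiplication):

        // factors.cu -- hypothetical factors; only the multiplication is the point
        #include <cstdio>

        int main() {
            const double more_cores    = 16.0;  // made-up: more shader cores per chip
            const double higher_clock  =  2.0;  // made-up: faster clocks
            const double per_core_gain = 18.0;  // made-up: wider/smarter cores
            printf("combined: %.0fx\n", more_cores * higher_clock * per_core_gain);  // 576x
            return 0;
        }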

  • by raftpeople ( 844215 ) on Friday August 28, 2009 @08:28PM (#29238271)
    "GPUs however are a completely different ballgame, where the performance of the card pretty much scales with the number of shader cores."

    But only to the degree that your problem maps to that level of parallelization. There are many problems that do not perform well on the GPU.
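
    A minimal example of the kind of problem that does not map: when every step depends on the previous result, there is no independent work to hand to thousands of threads (the recurrence here is just an arbitrary stand-in).

        // serial_dependency.cu -- a loop-carried dependence leaves the GPU nothing to parallelize
        #include <cstdio>

        int main() {
            const int n = 1 << 20;
            float x = 0.5f;
            for (int i = 0; i < n; ++i) {
                x = 3.9f * x * (1.0f - x);   // logistic map: step i needs the result of step i-1
            }
            printf("x = %f\n", x);
            return 0;
        }
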
  • Re:In other news... (Score:3, Informative)

    by raftpeople ( 844215 ) on Friday August 28, 2009 @08:37PM (#29238345)
    >> provides features you can only appreciate on a 120hz display

    I enjoy the following features of my GTX280 (used for calcs not games):
    CUDA (I compile C code, throw in a couple of lines of stuff for the GPU and it runs on the GPU, easy)
    Hardware optimizes my memory accesses and at times branchy code so the GPU is doing as much work as possible (makes it easy to get good results on the GPU)
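
    A sketch of the memory-access point, presumably the more flexible coalescing of that generation: when neighbouring threads read neighbouring addresses the hardware merges a warp's loads into a few transactions, while a strided pattern cannot be merged as well (kernel names and the stride are just for illustration).

        // coalescing.cu -- same arithmetic, very different memory behaviour
        #include <cuda_runtime.h>

        __global__ void coalesced(const float *in, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = 2.0f * in[i];       // thread k reads element k
        }

        __global__ void strided(const float *in, float *out, int n, int stride) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            int j = (i * stride) % n;               // neighbours read far-apart elements
            out[i] = 2.0f * in[j];
        }

        int main() {
            const int n = 1 << 20;
            float *in = NULL, *out = NULL;
            cudaMalloc((void **)&in,  n * sizeof(float));
            cudaMalloc((void **)&out, n * sizeof(float));
            coalesced<<<(n + 255) / 256, 256>>>(in, out, n);
            strided  <<<(n + 255) / 256, 256>>>(in, out, n, 32);
            cudaDeviceSynchronize();
            cudaFree(in);
            cudaFree(out);
            return 0;
        }
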
  • by Anonymous Coward on Friday August 28, 2009 @09:28PM (#29238643)

    But that's why there's a specific problem right in the name.

  • Re:% VS Times (Score:5, Informative)

    by glwtta ( 532858 ) on Friday August 28, 2009 @09:48PM (#29238737) Homepage
    >> For the rest of us of course 570% increase is 5.7X faster.

    It seems the rest of us don't understand what a "percent increase" means, either.

    (hint: 570% increase == 6.7X)
  • Re:% VS Times (Score:3, Informative)

    by Wildclaw ( 15718 ) on Saturday August 29, 2009 @05:43AM (#29241011)

    >> Where is my math wrong?

    This isn't about math (well, maybe a little) as much as it is about wording. Basically, it's the difference between "as fast/increasing/the speed" and "faster/increase". The first is a multiplicative reading, while the second is additive. So if you say "100% as fast", you are basically saying a * 100% = a, while if you say "100% faster" you are saying a + a*100% = 2*a. Now, looking at the posts, you started with this assertion:

    >> seeing 570% increase and going

    OK. A 570% Increase. That would be a+a*570% = 6.7*a.

    >> 570% increase is 5.7X faster.

    increase/faster. Good.

    >> GPUs increasing 5.7X

    Ouch. Here is the mistake: using "increasing". Suddenly you have downgraded the anticipated new speed from 6.7*a to a*5.7 = 5.7*a.

    Now, in your second post you used the multiplicative reading straight through, but that is pretty much the opposite of the first post, which used the additive reading except for a single time at the end.

    Also, as general advice, use X (times) only to represent a multiplicative action. That is the general meaning of the word, so it can be confusing when you hear it describing an additive action. Not really wrong, but confusing.
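
    A one-liner check of the two readings (a is just a baseline of 1):

        // percent.cu -- "570% increase" vs "5.7 times as fast"
        #include <cstdio>

        int main() {
            const double a = 1.0;
            printf("570%% increase:     %.1fx\n", a + a * 5.70);  // additive reading: 6.7x
            printf("5.7 times as fast: %.1fx\n", a * 5.70);       // multiplicative reading: 5.7x
            return 0;
        }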
