Graphics Hardware

NVIDIA Predicts 570x GPU Performance Boost

Posted by ScuttleMonkey
from the lets-talk-about-diminishing-returns dept.
Gianna Borgnine writes "NVIDIA is predicting that GPU performance is going to increase a whopping 570-fold in the next six years. According to TG Daily, NVIDIA CEO Jen-Hsun Huang made the prediction at this year's Hot Chips symposium. Huang claimed that while the performance of GPU silicon is heading for a monumental increase in the next six years — making it 570 times faster than the products available today — CPU technology will find itself lagging behind, increasing to a mere 3 times current performance levels. 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"
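For context, a quick back-of-the-envelope check (not from the article) of what those six-year figures would imply as compound annual growth, assuming smooth year-over-year compounding:

```python
# Back-of-the-envelope: implied compound annual growth rates for the
# claimed six-year improvements (assumption: steady year-over-year
# compounding, which the article itself does not state).
def implied_annual_growth(total_factor: float, years: int) -> float:
    """Return the per-year multiplier implied by a total improvement factor."""
    return total_factor ** (1.0 / years)

gpu = implied_annual_growth(570, 6)   # roughly 2.9x per year
cpu = implied_annual_growth(3, 6)     # roughly 1.2x per year
print(f"GPU: {gpu:.2f}x/year, CPU: {cpu:.2f}x/year")
```

In other words, the claim amounts to GPUs nearly tripling in performance every single year for six years straight.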
This discussion has been archived. No new comments can be posted.


  • But how? (Score:3, Insightful)

    by Anonymous Coward on Friday August 28, 2009 @04:24PM (#29236333)

    I read the article, but I don't see any explanation of how exactly that performance increase will come about. Nor is there any explanation of why GPUs will see the increase but CPUs will not. Anyone have a better article on the matter?

  • by TheRealMindChild (743925) on Friday August 28, 2009 @04:30PM (#29236425) Homepage Journal
    Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:
    1. The GPU has to become 570-fold more efficient
    2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

    Both seem highly unlikely.

  • by eln (21727) on Friday August 28, 2009 @04:30PM (#29236429) Homepage
    I don't doubt the prediction at all, I just have concerns about the vat of liquid nitrogen I'm going to have to immerse my computer in to keep that thing from overheating, and the power substation I'm going to need to build in my backyard to power it.
  • Good to know! (Score:5, Insightful)

    by CopaceticOpus (965603) on Friday August 28, 2009 @04:30PM (#29236431)

    Thanks for the heads up, Nvidia! I'll be sure to hold off for 6 years on buying anything with a GPU.

  • by Rix (54095) on Friday August 28, 2009 @04:33PM (#29236469)

    He constantly runs his mouth without any real thought to what he's saying. It's just attention whoring.

  • by Entropius (188861) on Friday August 28, 2009 @04:39PM (#29236541)

    I do high-performance lattice QCD calculations as a grad student. At the moment I'm running code on 2048 Opteron cores, which is about typical for us -- I think the big jobs use 4096 sometimes. We soak up a *lot* of CPU time on some large machines -- hundreds of millions of core-hours -- so making this stuff run faster is something People Care About.

    This sort of problem is very well suited to being put on GPUs, since the simulations are done on a four-dimensional lattice (say 40x40x40x96 -- for technical reasons the time direction is elongated) and since "do this to the whole lattice" is something that can be parallelized easily. The trouble is that GPUs don't have enough RAM to fit everything into memory (which is understandable, since the lattices are huge) and communications between multiple GPUs are slow (since we have to go GPU -> PCI Express -> Infiniband).

    If Nvidia were to make GPUs with extra RAM (could you stuff 16GB on a card?) or a way to connect them to each other by some faster method, they'd make a lot of scientists happy.
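A rough sense of why a lattice that size won't fit on a 2009-era card: storing just the gauge field (assuming SU(3) links as 3x3 complex doubles, four links per site; the assumptions and layout here are illustrative, and fermion fields plus workspace would add substantially more) already runs to several gigabytes:

```python
# Rough memory estimate for the gauge field of a 40^3 x 96 lattice.
# Assumptions (illustrative): SU(3) links stored as 3x3 complex doubles,
# 4 links per site; real codes also need fermion fields and workspace.
sites = 40**3 * 96                  # 6,144,000 lattice sites
bytes_per_link = 3 * 3 * 2 * 8      # 3x3 complex double = 144 bytes
gauge_bytes = sites * 4 * bytes_per_link
print(f"gauge field alone: {gauge_bytes / 2**30:.1f} GiB")
```

That is roughly 3.3 GiB before anything else is stored, versus the ~1 GB typical of a consumer GPU at the time.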

  • Re:But how? (Score:4, Insightful)

    by Spatial (1235392) on Friday August 28, 2009 @04:42PM (#29236579)
    It's Nvidia. Aren't they always saying things like this?

    It'll come about because BUY NVIDIA GPUS THEY ARE THE FUTURE, CPU SUX
  • by LoudMusic (199347) on Friday August 28, 2009 @04:50PM (#29236693)

    Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:

    1. The GPU has to become 570-fold more efficient
    2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

    Both seem highly unlikely.

    You don't feel it could be a combination of both? Kind of like they did with multi-core CPUs? Make a single unit more powerful, then use more units ... wow!

    There is more than one way to skin a cat.
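For what it's worth, the combination the parent describes is easy to sketch with simple arithmetic. Multiplicative gains on independent axes compound, so no single axis has to deliver 570x (every factor below is made up purely for illustration):

```python
# Illustrative only: a 570x total speedup need not come from one axis.
# Modest per-core gains multiplied by more cores and better utilization
# compound quickly. All three factors below are invented for the example.
per_core_gain = 3.0      # each core gets 3x faster
core_count_gain = 10.0   # 10x more cores fit on the die
utilization_gain = 19.0  # architecture/software extracts 19x more of peak
total = per_core_gain * core_count_gain * utilization_gain
print(total)  # 570.0
```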

  • Re:haha yeah right (Score:3, Insightful)

    by PIBM (588930) on Friday August 28, 2009 @04:54PM (#29236743) Homepage

    Stupid I know, but I would have had more confidence in a 500x increase, just because there are fewer significant digits and a wider error margin.

  • Re:The math (Score:5, Insightful)

    by BikeHelmet (1437881) on Friday August 28, 2009 @05:13PM (#29236985) Journal

    So in six years, Gordon Moore says we should have 32x the performance we have now.

    No - 32x the transistors.

    You fail to predict how using those transistors in a more optimized way (one more suitable to modern rendering algorithms) will affect performance.

    Just think about it - a plain old FPU and SSE4 might use the same number of transistors, but when the code needs to do a lot of fancy stuff at once, one is definitely faster.

    (inaccurate example, but you get the idea)
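As a sanity check on the "32x the transistors in six years" figure being debated here (assuming smooth exponential growth, which real process roadmaps only approximate), it corresponds to a doubling time of about 14 months:

```python
# Sanity check: "32x transistors in six years" implies a doubling time
# of about 14.4 months (assumption: smooth exponential growth).
import math

years, target = 6, 32
doublings = math.log2(target)              # 5 doublings
doubling_time_months = years * 12 / doublings
print(doubling_time_months)  # 14.4
```

That is faster than the "every two years" phrasing usually attributed to Moore, which over six years would give only 8x.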

  • by Minwee (522556) <dcr@neverwhen.org> on Friday August 28, 2009 @05:16PM (#29237025) Homepage

    "Did I mention that our next model is going to be SO amazing that you'll think that our current product is crap? The new model will make EVERYTHING obsolete and the entire world will need to upgrade to it when it comes out. People won't even be able to give away any older products. Sooooo... how many of this year's model will you be buying today?

    "Hello? Are you still there?

    "Hello?"

  • by Anonymous Coward on Friday August 28, 2009 @05:29PM (#29237169)

    Doing multiple layers either via lamination or deposition would make sense. But then there's this problem: How do you get the heat out of it? Those things aren't exactly running cool as they are now.

    But then again, maybe they figured something out that we don't know.

  • by Ant P. (974313) on Friday August 28, 2009 @05:57PM (#29237485) Homepage

    1. The GPU has to become 570-fold more efficient
    2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

    Both seem highly unlikely.

    If graphics card development in the last 10 years is anything to go by, nVidia's plan is that the GPU will become 570 times larger, draw 570 times more power and the fan will spin 570 times faster.

  • by 7-Vodka (195504) on Friday August 28, 2009 @06:02PM (#29237555) Journal
    And their linux drivers still SUCK.
  • by melf-san (1504607) on Friday August 28, 2009 @06:39PM (#29237861)
    Maybe the high-end ones, but the low-end GPUs are mostly passively cooled and still much more powerful than old GPUs.
  • Re:haha yeah right (Score:3, Insightful)

    by Martin Blank (154261) on Friday August 28, 2009 @07:38PM (#29238351) Journal

    I'm not shorting Intel's capabilities, but the IEEE has some solid people in it, too -- many of whom work at Intel -- and they're very capable of recognizing the potential problems with process shrinks. The issues that come about at the sizes they're discussing involve quantum tunneling effects that would (as I understand it) interfere in accurate computing. There is also doubt that transistors can be made to work at all at sizes below 16nm because the mechanisms that might deal with quantum tunneling may bring about other deleterious effects that may be even more difficult to solve.

    I'm not saying that it's impossible, or that Intel is too optimistic. They know a lot more about it than I do. But these kinds of things do slip, and it's hard to predict advances of this sort so many years down the road.

  • by fractoid (1076465) on Friday August 28, 2009 @11:03PM (#29239591) Homepage
    WTF Mods. He's just saying that at this price point you can get nearly double the performance from ATI than from nVidia. I love nVidia too, I run a 9800GT, but I'm not going to mod someone troll for pointing out that something else is now faster and cheaper.
  • My question is... (Score:1, Insightful)

    by Anonymous Coward on Saturday August 29, 2009 @12:43AM (#29240077)

    how much will my electric bill go up?

    A GeForce GTX 260 that I just recently bought requires a bare minimum of a 650 W power supply. With all the talk about going green lately, I'm wondering when or if we'll hit a ceiling for energy consumption in the home computing market.
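Worth noting that the 650 W figure is the PSU rating, not the actual draw. A rough monthly cost estimate, with every input a made-up assumption (average draw under load, hours of use, electricity rate), looks like this:

```python
# Rough electricity cost estimate. All three inputs are illustrative
# assumptions, not measurements: a 650 W PSU rating does not mean the
# system actually draws 650 W.
avg_watts = 250        # assumed average draw under load
hours_per_day = 4      # assumed daily usage
rate_per_kwh = 0.12    # assumed price in $/kWh
monthly_kwh = avg_watts / 1000 * hours_per_day * 30
print(f"~{monthly_kwh:.0f} kWh/month, ~${monthly_kwh * rate_per_kwh:.2f}/month")
```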

  • by 91degrees (207121) on Saturday August 29, 2009 @02:18AM (#29240497) Journal
    And the second you have an "if" in GPU code, everything grinds to a halt. Conditions effectively break the GPU SIMD (single instruction multiple data) model and bring the pipeline to a halt.

    This isn't totally accurate. Generally, conditions are handled by conditional writeback: you simply ignore the result if the test fails. You effectively have to perform both branches of a condition, so there's a performance hit relative to a CPU there, but "if (x < 0) { x = -x; }" isn't going to hurt your performance.
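The conditional-writeback idea described above can be sketched by emulating a handful of SIMD lanes in plain Python (a toy model, not real GPU code): every lane computes the branch body, and a per-lane mask decides which results are actually kept.

```python
# Toy emulation of predicated execution across SIMD lanes: all lanes
# evaluate the branch body, and a per-lane mask selects writeback.
def predicated_abs(lanes):
    mask = [x < 0 for x in lanes]      # the "if (x < 0)" test, per lane
    negated = [-x for x in lanes]      # every lane computes the branch body
    # conditional writeback: keep the new value only where the mask is set
    return [n if m else x for x, n, m in zip(lanes, negated, mask)]

print(predicated_abs([3, -7, 0, -2]))  # [3, 7, 0, 2]
```

The cost model follows directly: the work for both paths is always done, and divergence only hurts when the two paths are long.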
  • by FooBarWidget (556006) on Saturday August 29, 2009 @03:15AM (#29240717)

    I agree. I recently bought a laptop with an ATI card, and the biggest reason is that I heard they had gone open source. I was disappointed to find that their latest Catalyst driver doesn't work well on Ubuntu 9.04. The one recommended by Ubuntu works, but it's VERY slow when restoring a window in Compiz. All in all it feels like a downgrade compared to my Intel integrated graphics card. Sigh. :(

  • by ceoyoyo (59147) on Saturday August 29, 2009 @08:36AM (#29242167)

    "It assumes that CPU processors only get 20% faster per year (compounded). That would only be true if they did not add more cores to the CPU."

    "It is only good at performing a large number of repetitive single precision (32 bit) floating point calculations without branching."

    If we wanted a 64-bit GPU it would be easy enough to make. GPUs used to do weird mixes of integer and floating point math until the manufacturers made an effort to guarantee 32-bit precision throughout. That leaves the branching part of your statement, which is the same for CPUs with multiple cores. A modern general purpose GPU (that is, one that CAN branch) is pretty similar to a many-cores CPU in those terms.

    The two are converging. GPUs are getting more general purpose and CPUs are getting more parallel.
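Incidentally, the "20% faster per year" assumption quoted above compounds over six years to almost exactly the 3x figure in NVIDIA's claim:

```python
# The quoted "20% faster per year (compounded)" assumption over six
# years lands almost exactly on the 3x CPU figure in NVIDIA's claim.
cpu_total = 1.20 ** 6
print(f"{cpu_total:.2f}x")  # 2.99x
```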

  • Re:So... (Score:3, Insightful)

    by Hatta (162192) * on Saturday August 29, 2009 @11:16AM (#29243739) Journal

    All that resolution, and it still looks rendered. Instead of merely pushing more pixels, it would be nice if they did more to them, so it doesn't look so artificial.

"There is nothing new under the sun, but there are lots of old things we don't know yet." -Ambrose Bierce

Working...