Hardware Science

Moore's Law Will Die Without GPUs

Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."

Comments Filter:
  • An observation (Score:5, Informative)

    by Anonymous Coward on Tuesday May 04, 2010 @09:37AM (#32084140)

    Moore's Law is not a law, but an observation!

  • I am The Law (Score:5, Informative)

    by Mushdot ( 943219 ) on Tuesday May 04, 2010 @09:40AM (#32084166) Homepage

    I didn't realise Moore's Law was actually the driving force behind CPU development, rather than just an observation about semiconductor development. Surely we just say Moore's Law held until a certain point, and then someone else's law takes over?

    As for Phlogiston theory - it was just that, a theory which was debunked.

  • by 91degrees ( 207121 ) on Tuesday May 04, 2010 @09:44AM (#32084216) Journal
    But the only "law" is that the number of transistors doubles in a certain time (something of a self-fulfilling prophecy these days, since this is the yardstick the chip companies work to).

    Once transistors get below a certain size, of course it will end. Parallel or serial doesn't change things. We either have more processors in the same space, more complex processors or simply smaller processors. There's no "saving" to be done.
  • Re:An observation (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Tuesday May 04, 2010 @09:58AM (#32084372) Journal
    It's also not in any danger. The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).
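
    (A minimal sketch of the doubling arithmetic in the comment above; the starting transistors-per-dollar figure and the time points are illustrative assumptions, not data.)

    # Sketch only: transistors you can buy for a fixed investment, assuming
    # the count doubles every 18 months as stated above. The starting value
    # of 1e6 transistors per dollar is an illustrative assumption.
    def transistors_per_dollar(years, start=1e6, doubling_months=18):
        return start * 2 ** (years * 12 / doubling_months)

    for y in (0, 3, 6, 9):
        print(f"year {y}: {transistors_per_dollar(y):.3g} transistors per dollar")

    On those assumptions the count per dollar grows from 1e6 to 6.4e7 over nine years, whether the extra transistors go into speed, cores, or offload hardware.
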
  • by dpbsmith ( 263124 ) on Tuesday May 04, 2010 @10:06AM (#32084462) Homepage

    I'm probably being overly pedantic about this, but of course the word "law" in "Moore's Law" is almost tongue-in-cheek. There's no comparison between a physical "law" and a simple observation that some trend or another is exponential--most trends are, over a limited period of time. Moore is not the first person to plot an economic trend on semilog paper.

    There isn't even any particular basis for calling Moore's Law anything more than an observation. New technologies will not automatically come into being in order to fulfill it. Perhaps you can call it an economic law--people will not bother to go through the disruption of buying a new computer unless it is 30% faster than the previous one, so successive product introductions will always be about 30% faster, or something like that.

    In contrast, something like "Conway's Law"--"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"--may not be in the same category as Kepler's Laws, but it is more than an observation--it derives from an understanding of how people work in organizations.

    Moore's Law is barely in the same category as Bode's Law, which says that "the radius of the orbit of planet #N is 0.4 + 0.3 * 2^(N-1) astronomical units, if you call the asteroid belt a planet, pretend that 2^-1 is 0, and, of course, forget Pluto, which we now do anyway."
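
    (A minimal sketch of Bode's Law exactly as quoted above, with the asteroid belt counted as a planet and the 2^-1 term for Mercury forced to zero; the "actual" radii listed for comparison are approximate values added here for context.)

    # Titius-Bode rule as stated in the comment: r = 0.4 + 0.3 * 2^(N-1) AU,
    # with the 2^-1 term (Mercury) treated as 0. Actual radii are approximate.
    def bode_radius(n):
        return 0.4 + (0.0 if n == 0 else 0.3 * 2 ** (n - 1))

    bodies = [("Mercury", 0.39), ("Venus", 0.72), ("Earth", 1.00), ("Mars", 1.52),
              ("asteroid belt", 2.77), ("Jupiter", 5.20), ("Saturn", 9.58),
              ("Uranus", 19.2), ("Neptune", 30.1)]
    for n, (name, actual) in enumerate(bodies):
        print(f"{name:13s} predicted {bode_radius(n):6.2f} AU   actual {actual:6.2f} AU")

    On this formulation Neptune comes out near 38.8 AU against roughly 30 AU, which is part of why it rates as "barely" a law.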

  • by wwfarch ( 1451799 ) on Tuesday May 04, 2010 @10:18AM (#32084628)
    Nonsense. We'll just build more universes.
  • by Anonymous Coward on Tuesday May 04, 2010 @10:27AM (#32084782)

    Marketing guy?

    Before going to Nvidia maybe two years ago, Bill Dally was a professor in (and the chairman of) the computer science department at Stanford. He's a fellow of the ACM, the IEEE, and the AAAS.

        http://cva.stanford.edu/billd_webpage_new.html

    You might criticize this position, but don't dismiss him as a marketing hack. Nvidia managed to poach him from Stanford to become their chief scientist because he believed in the future of GPUs as a parallel-processing tool; it's not that he began drinking the Kool-Aid because he had no other options.

  • by hitmark ( 640295 ) on Tuesday May 04, 2010 @10:47AM (#32085122) Journal

    But parallelism is not a magic bullet. Unless the data being worked on can be chopped into independent parts that do not influence each other, or do so only minimally, the task is still more or less serial and will run at core speed.

    The only benefit for most users is that one is more likely to be able to keep doing something while other, unrelated tasks run in the background. But if each of those tasks wants to do something with the storage media, one is still sunk.
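
    (The limit described above is essentially Amdahl's Law; a minimal sketch of that arithmetic, with illustrative parallel fractions and core counts.)

    # Speedup when only a fraction p of the work can be split into independent
    # parts and the rest stays serial; adding cores helps less and less.
    def speedup(p, cores):
        return 1.0 / ((1.0 - p) + p / cores)

    for p in (0.5, 0.9, 0.99):
        line = ", ".join(f"{c} cores -> {speedup(p, c):.1f}x" for c in (2, 8, 64, 1024))
        print(f"parallel fraction {p:.0%}: {line}")

    Even with 1024 cores, a half-serial task barely reaches a 2x speedup.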

  • Infinite? (Score:3, Informative)

    by sean.peters ( 568334 ) on Tuesday May 04, 2010 @10:57AM (#32085270) Homepage

    "The universe regresses infinitely towards smaller and smaller particles. Behind atoms we find electrons, behind electrons we find quarks."

    Dude, this is clearly some sense of the word "infinite" of which I haven't been previously aware. A couple of things: 1) Atoms -> electrons -> quarks is three levels, which is not exactly infinity. 2) I'm not sure if this is what you meant, but electrons are not made of quarks; they're truly elementary particles. 3) No one thinks there's anything below quarks - the Standard Model may have some issues, but no one seriously questions the elementary status of quarks. 4) You can't do anything with quarks anyway - practically speaking, you can't even see an individual quark; they're tightly bound to each other in the form of hadrons.

    I think that in practice, we're going to run into problems before we even get to the level of atoms. Lithographic processes can only get you so far - we're already into the extreme ultraviolet, so to get smaller features we're going to have to start getting into x-rays/gamma rays, which have rather unfortunate health and safety issues associated with them, not to mention the difficult engineering problems involved in generating tightly focused beams. And even if you can solve that problem, you have to deal with noise introduced by electrons simply leaking from one lead to another. I think 246 doublings is way, way generous.
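
    (A rough back-of-the-envelope for that point; the ~32 nm feature size and ~0.2 nm atomic spacing are assumptions added here, with transistor count doubling at fixed die area so linear features shrink by a factor of sqrt(2) per doubling.)

    import math

    # How many transistor-count doublings remain before linear feature sizes
    # reach roughly atomic scale. All numbers here are rough assumptions.
    feature_nm = 32.0   # assumed current feature size
    atom_nm = 0.2       # assumed atomic spacing in silicon

    linear_shrink = feature_nm / atom_nm          # total linear shrink available
    doublings = 2 * math.log2(linear_shrink)      # each doubling shrinks features by sqrt(2)
    print(f"linear shrink available: {linear_shrink:.0f}x")
    print(f"doublings before atomic scale: ~{doublings:.0f}")

    On those assumptions it works out to roughly 15 doublings before features reach atomic scale, which is why 246 looks so generous.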

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Tuesday May 04, 2010 @11:45AM (#32086088) Homepage Journal

    He doesn't say that it should be done via the GPU.
    He says Intel and AMD need to focus on parallelism. This is true.

    The GPU/CPU comment was driven by the author of the article, clearly as an attempt to drum up some sort of flame war to drive hits to the article.
    Now, I would assume part of his job is to figure out how to do that properly with GPUs; however, at no point does he imply that only Nvidia can do this or that it can only be done on the GPU.
