Hardware / Science

Moore's Law Will Die Without GPUs

Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."
  • Objectivity? (Score:5, Insightful)

    by WrongSizeGlass ( 838941 ) on Tuesday May 04, 2010 @09:41AM (#32084178)
    Dr. Dally believes the only way to continue to make great strides in computing performance is to ... offload some of the work onto GPUs that his company just happens to make? [Arte Johnson] Very interesting [wikipedia.org].

    The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!" Perpetuating Moore's Law isn't an industry requirement, it's a prediction by a guy who was in the chip industry.
  • by iYk6 ( 1425255 ) on Tuesday May 04, 2010 @09:42AM (#32084190)

    So, a graphics card manufacturer says that graphics cards are the future? And this is news?

  • inevitable (Score:5, Insightful)

    by pastafazou ( 648001 ) on Tuesday May 04, 2010 @09:45AM (#32084228)
    Considering that Moore's Law was based on the observation that the industry was able to double the number of transistors about every 20 months, it was inevitable that at some point a limiting factor would be reached. That factor appears to be process size, which runs into a physical barrier: as process sizes continue to shrink, the physical size of atoms becomes a wall they can't get past.
  • Umm? (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday May 04, 2010 @09:52AM (#32084306) Journal
    Obviously "NVIDIA's Chief Scientist" is going to say something about the epochal importance of GPUs; but WTF?

    Moore's law, depending on the exact formulation you go with, posits either that transistor density will double roughly every two years or that density at minimum cost/transistor increases at roughly that rate.

    It is pretty much exclusively a prediction concerning IC fabrication (a business that NVIDIA isn't even in; TSMC handles all of their actual fabbing), without any reference to what those transistors are used for.

    Now, it is true that, unless parallel processing can be made to work usefully on a general basis, Moore's law will stop implying more powerful chips and just start implying cheaper ones (since, if the limits of effective parallel processing mean that you get basically no performance improvement going from X billion transistors to 2X billion transistors, Moore's law will continue; but instead of shipping faster chips each generation, vendors will just ship smaller, cheaper ones).

    In the case of servers, of course, the amount of cleverness and fundamental CS development needed to make parallelism work is substantially lower. If you have an outfit with 10,000 apache instances, or 5,000 VMs, or something similar, they will always be happy to have more cores per chip, since that means more apache instances or VMs per chip, which means fewer servers (or the same number of single/dual-socket servers instead of much more expensive quad/eight-socket servers), even if each instance/VM uses no parallelism at all and just sits at one core = one instance.
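
    The limit being alluded to here is essentially Amdahl's law: if a fraction p of a workload can run in parallel and the rest is serial, the best possible speedup on n cores is 1 / ((1 - p) + p/n). Even with p = 0.9, that tops out at 10x no matter how many cores a process shrink buys you, which is why doubling transistor counts stops implying faster chips once the easily parallelised work runs out.
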
  • Re:An observation (Score:3, Insightful)

    by maxume ( 22995 ) on Tuesday May 04, 2010 @09:53AM (#32084312)

    It is also a modestly self-fulfilling prediction, as planners have had it in mind as they were setting targets and research investments.

  • Re:An observation (Score:3, Insightful)

    by Pojut ( 1027544 ) on Tuesday May 04, 2010 @09:53AM (#32084314) Homepage

    Yeah, pretty much this. It's akin to the oil companies advertising the fact that you should use oil to heat your home...otherwise, you're wasting money!

  • by IBBoard ( 1128019 ) on Tuesday May 04, 2010 @10:07AM (#32084474) Homepage

    Moore's Law isn't exactly "a law". It isn't like "the law of gravity" where it is a certain thing that can't be ignored*. It's more "Moore's Observation" or "Moore's General Suggestion" or "Moore's Prediction". Any of those are only fit for a finite time and are bound to end.

    * Someone's bound to point out some weird branch of Physics that breaks whatever law I pick or says it is wrong, but hopefully gravity is quite safe!

  • by bmo ( 77928 ) on Tuesday May 04, 2010 @10:10AM (#32084516)

    Parallel processing *is* the way to go if we ever desire to solve the problem of AI.

    Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

    Just because he might be biased doesn't mean he's wrong.

    --
    BMO

  • by camg188 ( 932324 ) on Tuesday May 04, 2010 @10:31AM (#32084856)

    We either have more processors in the same space...

    Hence the need to embrace parallel processing. But the trend seems to be heading toward multiple low power RISC cores, not offloading processing to the video card.

  • Re:An observation (Score:4, Insightful)

    by hitmark ( 640295 ) on Tuesday May 04, 2010 @10:36AM (#32084928) Journal

    Yep, the "law" basically results in one of two things: more performance for the same price, or the same performance for a lower price.

    The thing is, though, that all of IT is hitched to the higher margins the first option produces, and does not want to go the route of the second. The second, however, is what netbooks hinted at.

    The IT industry is used to boutique pricing, but is rapidly dropping toward commodity pricing.

  • by jwietelmann ( 1220240 ) on Tuesday May 04, 2010 @10:46AM (#32085088)
    Wake me up when NVIDIA's proposed solution doesn't double my electrical bill and set my computer on fire.
  • What's a 'law'? (Score:3, Insightful)

    by MoellerPlesset2 ( 1419023 ) on Tuesday May 04, 2010 @11:00AM (#32085322)
    Well, no, Moore's Law was never passed by any legislative authority, no.

    As for a scientific law, 'laws' in science are like version numbers in software:
    There's no agreed-upon definition whatsoever, but for some reason people still seem to attribute massive importance to them.

    If anything a 'law' is a scientific statement that dates from the 18th or 19th century, more or less.
    Hooke's law is an empirical approximation.
    The Ideal Gas law is exact, but only as a theoretical limit.
    Ohm's law is actually a definition (of resistance).
    The Laws of Thermodynamics are (likely) the most fundamental properties of nature that we know of.

    The only thing these have in common is that they're from before the 20th century, really.
  • Re:Objectivity? (Score:5, Insightful)

    by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Tuesday May 04, 2010 @11:10AM (#32085500) Homepage

    If your CPU is running at 60%, you need more or faster memory, and faster main storage, not a faster CPU. The CPU is being starved for data. More parallel processing would mean that your CPU would be even more underutilized.

  • by Morgaine ( 4316 ) on Tuesday May 04, 2010 @11:10AM (#32085510)

    Perhaps nVidia's chief scientist wrote his piece because nVidia wants its very niche CUDA/OpenCL computational offering to expand and become mainstream. There's a problem with that though.

    The computational ecosystems that surround CPUs can't work with hidden, undocumented interfaces such as nVidia is used to producing for graphics. Compilers and related tools hit the user-mode hardware directly, while operating systems fully control every last register on CPUs at supervisor level. There is no room for nVidia's traditional GPU secrecy in this new computational area.

    I rather doubt that the company is going to change its stance on openness, so Dr. Dally's statement opens up the parallel computing arena very nicely to its traditional rival ATI, which under AMD's ownership is now a strongly committed open-source company.

  • Re:An observation (Score:3, Insightful)

    by Bakkster ( 1529253 ) <Bakkster@man.gmail@com> on Tuesday May 04, 2010 @11:26AM (#32085778)

    The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).

    Actually, parallel processing is completely external to Moore's Law, which refers only to transistor quantity/size/cost, not what they are used for.

    So while he's right that parallel computing will probably need to become the norm for CPU makers to continue to realize performance benefits, it neither depends upon nor supports Moore's Law. We can continue to shrink transistor size, cost, and distance apart without using parallel computing; similarly, improving speed with multiple cores neither depends upon nor ensures any improvement in transistor technology.
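
    For scale, a doubling every 18 months works out to N(t) = N(0) * 2^(t / 1.5) with t in years, i.e. roughly a hundredfold increase in transistor budget per dollar over a decade (2^(10/1.5) is about 100), regardless of whether those transistors go into more cores, bigger caches, or domain-specific offload blocks.
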

  • by Anonymous Coward on Tuesday May 04, 2010 @11:37AM (#32085962)

    Nine processors can't render an image of a baby in one system clock tick, sonny.

  • by clarkn0va ( 807617 ) <apt,get&gmail,com> on Tuesday May 04, 2010 @11:50AM (#32086156) Homepage

    The article has Dally advocating more efficient processing done in parallel. The potential benefits of this are obvious if you've compared the power consumption of a desktop computer decoding h.264@1080p in software (CPU) and in hardware (GPU). My own machine, for example, consumes less than 10W over idle (+16%) when playing 1080p, and ~30W over idle (+45%) using software decoding. And no fires. See also the phenomenon of CUDA and PS3s being used as mini supercomputers, again, presumably without catching on fire a lot.

    What was your point again?
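
    Taking those numbers at face value, +10W being a 16% bump implies an idle draw of about 62W (10 / 0.16), and +30W being 45% implies about 67W (30 / 0.45); either way, hardware decoding is saving on the order of 20W for the same 1080p playback.
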

  • by mathmathrevolution ( 813581 ) on Tuesday May 04, 2010 @12:10PM (#32086492)

    Obviously there's a conflict-of-interest here, but that doesn't mean the guy is necessarily wrong. It just means you should exercise skepticism and independent judgment.

    In my independent judgment, I happen to agree with the guy. Clock speeds have been stalled at ~3 GHz for nearly a decade now. There are only so many ways of getting more done per clock cycle, and radical parallelization is a good answer. Many research communities, such as fluid dynamics, are already performing real computational work on the GPU, and the entire industry is shifting toward a GPGPU paradigm. Programming languages are also being written to further take advantage of parallelization. In my humble opinion, we're approaching the point where every computation that can be processed in parallel will be. For what it's worth, I actually think both Intel and AMD/ATI are doing a much better job at this than Nvidia.
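
    To make the "GPGPU paradigm" concrete: the research codes mentioned above mostly reduce to data-parallel kernels in which thousands of GPU threads each update one element of a large array. Below is a minimal, hypothetical CUDA sketch of that pattern (a simple a*x + y update, not code from any project named in this thread):

        // Minimal data-parallel example: each GPU thread handles one array element.
        #include <cstdio>
        #include <vector>
        #include <cuda_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
            if (i < n) y[i] = a * x[i] + y[i];              // one element per thread
        }

        int main() {
            const int n = 1 << 20;                          // ~1M elements
            std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

            float *dx, *dy;
            cudaMalloc(&dx, n * sizeof(float));
            cudaMalloc(&dy, n * sizeof(float));
            cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

            // Launch enough 256-thread blocks to cover every element.
            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
            cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

            printf("y[0] = %f\n", hy[0]);                   // expect 3*1 + 2 = 5
            cudaFree(dx);
            cudaFree(dy);
            return 0;
        }

    Compiled with nvcc, the kernel runs one lightweight thread per element; any speedup over a CPU loop comes entirely from how many of those threads the GPU can keep in flight at once.
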

  • by Surt ( 22457 ) on Tuesday May 04, 2010 @12:15PM (#32086586) Homepage Journal

    Parallelism is a decent magic bullet. The number of interesting computing tasks I've seen that cannot be partitioned into parallel tasks has been quite small. That's why 100% of the top 500 supercomputers are parallel machines.

  • by Surt ( 22457 ) on Tuesday May 04, 2010 @02:05PM (#32088448) Homepage Journal

    Well, without the slowness of windows, we wouldn't need faster computers, so there'd be nothing driving innovation.

  • by Surt ( 22457 ) on Tuesday May 04, 2010 @02:39PM (#32088928) Homepage Journal

    I don't understand ... user interaction on the desktop typically uses 1% of a modern CPU.

    Virus scans are getting to be more CPU-bound if you have an SSD.
