Moore's Law Will Die Without GPUs

Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."

  • by nedlohs ( 1335013 ) on Tuesday May 04, 2010 @09:54AM (#32084322)

    Guy at company that does nothing but parallel processing says that parallel processing is the way to go.

    Moore's law has to stop at some point; it's an exponential function, after all. Currently we are in the 10^6 range (2,000,000 or so transistors per chip), while lower estimates for the number of atoms in the universe are around 10^80.

    (80 - 6) * (log(10)/log(2)) ≈ 246.

    So clearly we are going to run into some issues with this doubling thing sometime in the next 246 doublings...
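
    (A quick sanity check of that arithmetic, as a minimal Python sketch; the 10^6 transistor count and 10^80 atom estimate are the figures above.)

        import math

        transistors_now = 1e6      # ~10^6 range, per the comment above
        atoms_in_universe = 1e80   # lower-bound estimate, per the comment above

        # Doublings left before a single chip would need more transistors
        # than there are atoms in the universe.
        doublings = math.log2(atoms_in_universe / transistors_now)
        print(f"about {doublings:.0f} doublings left")  # about 246

        # At the classic rate of one doubling every 18 months:
        print(f"about {doublings * 1.5:.0f} years")     # about 369 years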

  • Re:Objectivity? (Score:5, Interesting)

    by Eccles ( 932 ) on Tuesday May 04, 2010 @10:00AM (#32084390) Journal

    The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"

    As someone who still spends way too much time waiting for computers to finish tasks, I think there's still room for both. What we really want is CPUs that are lightning-fast and highly parallel (and not necessarily low-power) for brief bursts, and low-power the rest of the time.

    My CPU load (3 GHz Core 2 Duo) is at 60% right now thanks to a build running in the background. More power, Scotty!

  • Re:inevitable (Score:4, Interesting)

    by Junior J. Junior III ( 192702 ) on Tuesday May 04, 2010 @10:00AM (#32084392) Homepage

    At some point, they'll realize that instead of making the die features smaller, they can make the die larger. Or three-dimensional. There are problems with both approaches, but if they figure out how to solve them, they'll be able to keep doubling transistor counts for a while longer.

  • Code Morphing... (Score:1, Interesting)

    by Anonymous Coward on Tuesday May 04, 2010 @10:44AM (#32085066)

    It doesn't surprise me, since Nvidia hired a bunch of ex-Transmeta engineers last year. They are more than likely working on running the GPU with a BIOS that can boot it into whatever instruction set they want. That would completely negate Moore's Law, since packing more cores onto a chip would directly affect performance.

  • Re:Umm? (Score:3, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday May 04, 2010 @11:12AM (#32085526) Journal
    Certainly, there are challenges to Moore's law, either fundamental physics or sheer manufacturing difficulty; but they have nothing to do with what the transistors are for (aside from modest differences if the issues have to do with manufacturing difficulty: if your 10nm process is plagued by high defect rates, it is probably easier to build SRAM, with tiny functional blocks, test for bad ones, encode the bad block addresses in a little onboard ROM, and have the motherboard BIOS do some remapping tricks to avoid them, than it is to build CPUs, with large functional blocks, and get pitiful yields).

    As for applications, there are definitely huge numbers of them that will see little or no benefit from more cores (either because their devs are lazy/incompetent, or because customers won't pay enough to justify the greater cost of dealing with hairy parallelism bugs, or because they depend on algorithms that are fundamentally sequential). However, because of servers and virtualization, demand for more cores should continue unabated on the high end for as long as vendors can deliver. If your enterprise has tens or hundreds of thousands of distinct processes, or tens of thousands of distinct VMs, you already possess a crude sort of parallelism, even if every single one of those is dumb as a rock and can only make use of a single core.
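
    (To make the SRAM-remapping argument above concrete, here is a minimal Python sketch using a simple Poisson yield model; the defect density, die area, and block counts are illustrative assumptions, not real process data.)

        import math

        # Poisson yield model: P(defect-free) = exp(-D * A),
        # with D = defects per cm^2 and A = area in cm^2.
        D = 0.5        # assumed defect density on an immature process
        area = 2.0     # assumed area of one large monolithic CPU die

        cpu_yield = math.exp(-D * area)  # any single defect kills the CPU

        # Same silicon area as 256 small SRAM blocks, where up to 16 bad
        # blocks can be mapped out via an onboard remapping table.
        blocks, spares = 256, 16
        block_yield = math.exp(-D * area / blocks)
        bad = 1.0 - block_yield
        # Probability that at most `spares` blocks are defective:
        sram_yield = sum(
            math.comb(blocks, k) * bad**k * block_yield**(blocks - k)
            for k in range(spares + 1)
        )
        print(f"CPU yield: {cpu_yield:.1%}, SRAM yield: {sram_yield:.1%}")
        # CPU yield: 36.8%, SRAM yield: 100.0%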
  • by JumpDrive ( 1437895 ) on Tuesday May 04, 2010 @12:43PM (#32087120)

    The CPU industry has been developing quad cores and releasing 8-core parts, but a lot of my software can't take advantage of them.

    We just bought the latest version of one company's software and found that it ran a lot slower than the earlier version. I happened to stick it on a VM with only one core and it ran a lot faster.

    We talked about MATLAB yesterday not being able to do 64-bit integers; big deal. I was told that their Neural Network package doesn't have parallel processing capabilities. I was like, you have got to be freaking kidding me. A $1000 NN package that doesn't support parallel processing.
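
    (For illustration, a minimal Python sketch of the kind of embarrassingly parallel loop a package like that could spread across cores with the standard library alone; the workload below is a hypothetical stand-in, not the actual package.)

        import multiprocessing as mp

        def train_one(seed):
            # Stand-in for one independent unit of work, e.g. training
            # one network per random seed.
            total = 0
            for i in range(1_000_000):
                total += (i * seed) % 7
            return total

        if __name__ == "__main__":
            seeds = list(range(16))
            serial = [train_one(s) for s in seeds]   # one core does everything
            with mp.Pool() as pool:                  # same tasks, all cores
                parallel = pool.map(train_one, seeds)
            assert serial == parallel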
  • Re:An observation (Score:3, Interesting)

    by timeOday ( 582209 ) on Tuesday May 04, 2010 @03:03PM (#32089294)

    The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction.

    That is not sustainable at all. Let's say we reach the magic number of 1e10 transistors and nobody can figure out how to get performance gains from more transistors. If the price dropped 50% every 18 months, then after 10 years CPU prices would have fallen by about 99%. Intel's flagship processor would be about $20, and most of the CPUs they sell (nice workaday CPUs) about $1.50. There's no way they can live on that.
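
    (The arithmetic behind those figures, as a minimal Python sketch; the starting prices are illustrative assumptions chosen to match the estimates above.)

        halvings = 10 * 12 / 18      # 18-month price halvings in 10 years (~6.67)
        remaining = 0.5 ** halvings  # fraction of today's price left (~0.98%)

        # Assumed 2010 price points: a $2000 flagship, a $150 workaday CPU.
        for label, price in [("flagship", 2000.0), ("workaday", 150.0)]:
            print(f"{label}: ${price * remaining:,.2f}")
        # flagship: $19.69, workaday: $1.48 -- about a 99% drop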
