Moore's Law Will Die Without GPUs
Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."
Who would have thunk it (Score:4, Interesting)
Guy at company that does nothing but parallel processing says that parallel processing is the way to go.
Moore's law has to stop at some point. It's an exponential function, after all. Currently we are in the 10^6 range (2,000,000 or so), and our lower estimates for the number of atoms in the universe are around 10^80.
(80 - 6) * (log(10)/log(2)) = 246.
So clearly we are going to run into some issues with this doubling thing sometime in the next 246 doublings...
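The back-of-envelope figure above can be checked directly; this sketch uses the commenter's own numbers (2,000,000 as the current count and 10^80 as a low estimate for atoms in the universe, both taken from the comment, not independently verified):

```python
import math

# How many doublings until the commenter's "current count" of 2,000,000
# would exceed a low estimate (10^80) for atoms in the universe?
current = 2_000_000
atoms_low = 10 ** 80

doublings = math.log2(atoms_low / current)
print(round(doublings))  # ≈ 245, matching the ~246 from (80 - 6) * log(10)/log(2)
```

The small discrepancy with 246 comes from using 2,000,000 exactly rather than rounding down to 10^6.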
Re:Objectivity? (Score:5, Interesting)
The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"
As someone who still spends way too much time waiting for computers to finish tasks, I think there's still room for both. What we really want is CPUs that are lightning-fast and likely multi-parallel (and not necessarily low-power) for brief bursts of time, and low-power the rest of the time.
My CPU load (3 GHz Core 2 Duo) is at 60% right now thanks to a build running in the background. More power, Scotty!
Re:inevitable (Score:4, Interesting)
At some point, they'll realize that instead of making the die features smaller, they can make the die larger. Or three-dimensional. There are problems with both approaches, but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.
Code Morphing... (Score:1, Interesting)
It doesn't surprise me, since Nvidia hired a bunch of ex-Transmeta engineers last year. They are more than likely working on running the GPU with a BIOS that boots whatever instruction set they want on the GPU. That would completely negate Moore's Law, since packing more cores onto a chip would directly affect performance.
Re:Umm? (Score:3, Interesting)
As for applications, there are definitely huge numbers of them that will see little or no benefit from more cores (either because their devs are lazy/incompetent, or because customers won't pay enough for them to justify the greater costs of dealing with hairy parallelism bugs, or because they depend on algorithms that are fundamentally serial). However, because of servers and virtualization, the demand for more cores should continue unabated on the high end for as long as vendors are able to deliver. If your enterprise runs tens or hundreds of thousands of distinct processes, or tens of thousands of distinct VMs, you already possess a crude sort of parallelism, even if every single one of those is dumb as a rock and can only make use of a single core.
I think he is beating on the wrong people (Score:3, Interesting)
We just bought the latest version of software from one company and found that it ran a lot slower than the earlier version. I happened to stick it on a VM with only one core and it worked a lot faster.
We talked yesterday about MATLAB not being able to do 64-bit integers; big deal. I was told that their Neural Network package doesn't have parallel processing capabilities. I was like, you have got to be freaking kidding me. A $1000 NN package that doesn't support parallel processing.
Re:An observation (Score:3, Interesting)
That is not sustainable at all. Let's say we reach the magic number of 1e10 transistors and nobody can figure out how to get performance gains from more transistors. If the price dropped 50% every 18 months, then after 10 years CPU prices would have fallen by about 99%. Intel's flagship processor would be about $20, but most of the CPUs they sell (nice workaday CPUs) about $1.50. There's no way they can live on that.
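The decay works out as follows; this is a minimal sketch of the comment's arithmetic, and the $2,000 flagship and $150 workaday starting prices are illustrative assumptions, not figures from the comment:

```python
# Price remaining after ten years of 50% drops every 18 months.
months = 10 * 12
halvings = months / 18             # = 120 / 18 ≈ 6.67 halvings
remaining = 0.5 ** halvings        # fraction of the original price left

print(f"{(1 - remaining) * 100:.1f}% drop")   # ≈ 99.0% drop

# Hypothetical starting prices (assumptions for illustration):
flagship_start, workaday_start = 2000, 150
print(f"${flagship_start * remaining:.2f}")   # ≈ $19.69, roughly the $20 quoted
print(f"${workaday_start * remaining:.2f}")   # ≈ $1.48, roughly the $1.50 quoted
```

So the quoted $20 and $1.50 endpoints are consistent with roughly $2,000 and $150 price points today.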