Moore's Law Will Die Without GPUs
Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."
An observation (Score:5, Informative)
Moore's is not a law, but an observation!
Re:An observation (Score:5, Funny)
Guy who sells GPUs says if people don't start to buy more GPUs, computers are DOOMED.
I don't know about you, but I'm sold.
Re: (Score:3, Insightful)
Yeah, pretty much this. It's akin to the oil companies advertising the fact that you should use oil to heat your home...otherwise, you're wasting money!
Heat and power consumption. (Score:5, Insightful)
Re: (Score:3, Insightful)
The article has Dally advocating more efficient processing done in parallel. The potential benefits of this are obvious if you've compared the power consumption of a desktop computer decoding h.264@1080p in software (CPU) and in hardware (GPU). My own machine, for example, consumes less than 10W over idle (+16%) when playing 1080p, and ~30W over idle (+45%) using software decoding. And no fires. See also the phenomenon of CUDA and PS3s being used as mini supercomputers, again, presumably without catching on
Re: (Score:2)
He's conflicted, but he's still right (Score:3, Insightful)
Obviously there's a conflict-of-interest here, but that doesn't mean the guy is necessarily wrong. It just means you should exercise skepticism and independent judgment.
In my independent judgment, I happen to agree with the guy. Clock speeds have been stalled at ~3 GHz for nearly a decade now. There are only so many ways of getting more per clock cycle, and radical parallelization is a good answer. Many research communities, such as fluid dynamics, are already performing real computational work on the GP
Re: (Score:3, Funny)
Right, there's no physical reason that the rate couldn't be higher or lower.
However, it is a well-fit trend. He saw the trend, and predicted it should continue for at least 10 years. It has continued for much longer than that.
My complaint is with using it as a buzz-word for a completely unrelated phenomenon. He might as well claim that we need to use parallel GPUs to promote synergy in the next paradigm shift, and leverage the dynamic long-tail proactively.
Re: (Score:3, Insightful)
It is also a modestly self-fulfilling prediction, as planners have had it in mind as they were setting targets and research investments.
Re:An observation (Score:5, Informative)
Re:An observation (Score:4, Insightful)
yep, the "law" basically results in one of two things: more performance for the same price, or the same performance for a cheaper price.
Thing is, though, all of IT is hitched to the higher margins the first option produces, and doesn't want to go the route of the second. The second, however, is what netbooks hinted at.
The IT industry is used to boutique pricing, but is rapidly dropping towards commodity.
Re: (Score:2)
The IT industry is used to boutique pricing, but is rapidly dropping towards commodity.
Exactly.
I recently upgraded my 3 year old computer from a 2.6 GHz dual core to a 3.4 GHz quad core. Well, with overclocking, 3.0 GHz vs 3.7 GHz.
Honestly enough, I upgraded more for compatibility with the newest video cards than for CPU reasons. Well, that and my 'server', i.e. the next older computer, was an older single core unit with AGP graphics, to give you a clue about its age.
I'm not that impressed. And that's a problem. If my $1k upgrade over a 3 year old $1k upgrade* doesn't impress me, then I'm not going
Re: (Score:3, Insightful)
The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).
Actually, parallel processing is completely external to Moore's Law, which refers only to transistor quantity/size/cost, not what they are used for.
So while he's right that for CPU makers to continue to realize performance benefits, parallel computing will probably need to become the norm, it doesn't depend upon nor support Moore's Law. We can continue to shrink transistor size, cost, and distance apart without using parallel computing; similarly by improving speed with multiple cores we neither depend up
Re: (Score:3, Interesting)
That is not sustainable at all. Let's say we reach the magic number of 1e10 transistors and nobody can figure out how to get performance gains from more transistors. If the price dropped 50% every 18 months, after 10 years CPU costs would drop by about 99%. Intel's flagship processor would be about $
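As a sanity check on that price-decay claim, here's a minimal sketch of the arithmetic; the 50%-per-18-months rate and the 10-year horizon are taken from the comment above, everything else is just illustration:

#include <cmath>
#include <cstdio>

int main() {
    // Assume cost halves every 18 months, as claimed above.
    const double halving_period_months = 18.0;
    const double horizon_months = 10.0 * 12.0;  // 10 years

    // Fraction of the original cost remaining after the horizon.
    double remaining = std::pow(0.5, horizon_months / halving_period_months);
    std::printf("cost remaining: %.2f%% (a drop of ~%.1f%%)\n",
                remaining * 100.0, (1.0 - remaining) * 100.0);
    // Prints roughly: cost remaining: 0.98% (a drop of ~99.0%)
    return 0;
}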
Re: (Score:2)
Re: (Score:2)
So? Welcome to science!
A whole lot of "laws" were formulated and used, considered correct and useful, until one day they were proven incorrect. Considering how insignificant Moore's law is when it comes to the scientific community, I could think of worse contradictions.
Re: (Score:2)
Re: (Score:2)
I was merely trying to point out that laws offer no explanation. If an explanation is offered, we get a theory. Newton had laws of motion. Einstein had the theory of gravity. One attempts an explanation, the other does not. So just calling it an observation isn't as devastating a blow to a law as one might think.
It fits a law because it is predictive and simple.
What's a 'law'? (Score:3, Insightful)
As for a scientific law, 'laws' in science are like version numbers in software:
There's no agreed-upon definition whatsoever, but people still seem to attribute massive importance to them for some reason.
If anything a 'law' is a scientific statement that dates from the 18th or 19th century, more or less.
Hooke's law is an empirical approximation.
The Ideal Gas law is exact, but only as a theoretical limit.
Ohm's law is act
Re: (Score:2)
Nope, it's a law:
http://www.merriam-webster.com/dictionary/law [merriam-webster.com] (definition #1 even!)
http://en.wikipedia.org/wiki/Law [wikipedia.org]
Please people, stop making yourselves look foolish claiming Moore's Law isn't a law. This comes up every time!
Re: (Score:2, Funny)
Re: (Score:2)
It's a purely economic law, not a technical or physical law. It simply states that, apparently, chip manufacturers need their chips to be a certain percentage better than their predecessors', otherwise consumers will walk over to the competitor that can offer it.
Moore was an engineer, not a project manager or accountant. It has absolutely nothing to do with people buying things, it's a purely technical observation. What he actually said was:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
What you claim can certainly be interpreted as following from Moore's Law, but it has nothing to do with Moore himself or what he actually said.
I am The Law (Score:5, Informative)
I didn't realise Moore's Law was purely the driving force behind CPU development and not just an observation on semiconductor development. Surely we just say Moore's Law held until a certain point, then someone else's Law takes over?
As for Phlogiston theory - it was just that, a theory which was debunked.
Technology development vs. natural laws (Score:2)
Moore's law describes the human ability to make better processes, leading to better miniaturization and more precise printing of higher-density transistors in smaller spaces. It is not a law that concerns natural processes, obviously -- and although it does hold true for now, it is bound to reach an end of life.
Moore's law will not be debunked, but we will surely go past it sooner or later. We cannot keep shrinking transistor size forever, as molecules and atoms give us an absolute minimum size
Re: (Score:2)
Moore's law will not be debunked, but we will surely go past it sooner or later.
Moore's law is not a law, nor even a theory. It is an observation, nothing more.
It can't be debunked by definition, as debunking (proving wrong) can only happen when a statement claims to prove something in the first place.
An observation always remains true regardless of (and despite) its predictive power.
If I see a blue butterfly today, and tomorrow something happens to cause all blue butterflies to go extinct or something, that 100% will change any future predictions based on my observation. It does not
Objectivity? (Score:5, Insightful)
The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!" Perpetuating Moore's Law isn't an industry requirement, it's a prediction by a guy who was in the chip industry.
Re:Objectivity? (Score:5, Interesting)
The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"
As someone who still spends way too much time waiting for computers to finish tasks, I think there's still room for both. What we really want is CPUs that are lightning-fast and likely multi-parallel (and not necessarily low-power) for brief bursts of time, and low-power the rest of the time.
My CPU load (3 GHz Core 2 Duo) is at 60% right now thanks to a build running in the background. More power, Scotty!
Re: (Score:2)
Your CPU spends the vast majority of its time waiting ... or doing stuff that the operating system thinks is important and you don't ...
If your CPU is not at 100% then the lag is not due to the CPU
Re: (Score:2)
Get faster disks till your CPU is at 100% if you want a faster build.
Re:Objectivity? (Score:5, Insightful)
If your CPU is running at 60%, you need more or faster memory, and faster main storage, not a faster CPU. The CPU is being starved for data. More parallel processing would mean that your CPU would be even more underutilized.
Re: (Score:2)
Closed source computation won't fly (Score:3, Insightful)
Perhaps nVidia's chief scientist wrote his piece because nVidia wants its very niche CUDA/OpenCL computational offering to expand and become mainstream. There's a problem with that though.
The computational ecosystems that surround CPUs can't work with hidden, undocumented interfaces such as nVidia is used to producing for graphics. Compilers and related tools hit the user-mode hardware directly, while operating systems fully control every last register on CPUs at supervisor level. There is no room for nV
Re: (Score:2)
I would agree with you if ATI's supposed commitment to open source had any impact on reality.
Has ATI's commitment to open source provided timely Catalyst drivers for the year old Fedora 12 release on my 3-month old (at the time of install) laptop? Oh rig
Re: (Score:2)
The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"
Personally I think the form factor will be the defining property, not the power. There are some things you'd rather do on your phone, some you'd rather do on your laptop, and some you'd rather have a full-size keyboard, mouse, screen etc. for. Maybe there's room for an iPad in that, at least people think there is. Even if all of them would last 12 hours on battery, you'd not like to carry a laptop 24/7 or try typing up a novel on a smart phone. I think we will simply have more gadgets, not one even if it runs o
Nvidia says GPUs are the future? (Score:5, Insightful)
So, a graphics card manufacturer says that graphics cards are the future? And this is news?
Re:Nvidia says GPUs are the future? (Score:5, Funny)
THIS! IS! SLASHDOT!
Re: (Score:2)
Oh boy. I can already see the YouTube videos popping up...
Re: (Score:2)
Just a few years behind the meme...
Re:Nvidia says GPUs are the future? (Score:5, Informative)
Marketing guy?
Before going to Nvidia maybe two years ago, Bill Dally was a professor in (and the chairman of) the computer science department at Stanford. He's a fellow of the ACM, IEEE, and AAAS.
http://cva.stanford.edu/billd_webpage_new.html
You might criticize this position, but don't dismiss him as a marketing hack. NVidia managed to poach him from Stanford to become their chief scientist because he believed in the future of GPUs as a parallel processing tool, not that he began drinking the kool-aid because he had no other options.
Moore's law will apply until it doesn't (Score:5, Informative)
Once transistors get below a certain size, of course it will end. Parallel or serial doesn't change things. We either have more processors in the same space, more complex processors or simply smaller processors. There's no "saving" to be done.
Re:Moore's law will apply until it doesn't (Score:4, Insightful)
Hence the need to embrace parallel processing. But the trend seems to be heading toward multiple low power RISC cores, not offloading processing to the video card.
Re:Moore's law will apply until it doesn't (Score:4, Informative)
but parallel is not a magic bullet. Unless one can chop the data worked on into independent parts that do not influence each other, or do so minimally, the task is still more or less linear and so will be done at core speed.
the only benefit for most users is that one is more likely to be doing something while other, unrelated, tasks are done in the background. But if each task wants to do something with storage media, one is still sunk.
Re: (Score:3, Insightful)
Parallel is a decently magic bullet. The number of interesting computing tasks I've seen that cannot be partitioned into parallel tasks has been quite small. That's why 100% of the top 500 supercomputers are parallel devices.
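As an illustration of the "chop the data into independent parts" point above, here's a minimal sketch of a parallel sum; the array contents and thread count are arbitrary, and real partitioning problems are of course messier than this:

#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);        // made-up workload: a million ones
    const unsigned nthreads = 4;              // arbitrary core count for the example
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> workers;

    // Each thread sums its own independent slice, so the threads never
    // influence each other -- this is the case where parallelism pays off.
    const size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto &w : workers) w.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::printf("sum = %lld\n", total);       // 1000000
    return 0;
}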
Re: (Score:3, Insightful)
I don't understand ... user interaction on the desktop typically uses 1% of a modern CPU.
Virus scans are getting to be more CPU-bound if you have an SSD.
Re: (Score:2)
GPU offloading has appeared with GPGPU. For example, Windows 7 can perform video transcoding using GPGPU [techeta.com] on the Ion.
No, it's not as useful as a general chip like the CPU, but with software support it can speed up some tasks considerably.
Parallel processing isn't magic. (Re:Moore's law... (Score:3, Insightful)
Nine processors can't render an image of a baby in one system clock tick, sonny.
inevitable (Score:5, Insightful)
Re:inevitable (Score:4, Interesting)
At some point, they'll realize that instead of making the die features smaller, they can make the die larger. Or three-dimensional. There are problems with both approaches, but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.
Re: (Score:2)
but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.
32nm process is off the shelf today. Silicon lattice spacing 0.5 nm. Single atom "crystal" leaves a factor of 60 possible. Realistically, I think they're stuck at one order of magnitude.
At best, you could increase CPU die size by two orders of magnitude before the CPU was bigger than my phone or laptop.
Total 3 orders of magnitude. 2^10 is 1024. So, we've got, at most, 10 more doublings left.
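A sketch of that back-of-the-envelope estimate, using the figures from the comment above (32 nm process, ~0.5 nm silicon lattice spacing, roughly one order of magnitude of realistic shrink plus two orders of magnitude of die growth); it's all rough by construction:

#include <cmath>
#include <cstdio>

int main() {
    // Linear shrink headroom: 32 nm features vs ~0.5 nm lattice spacing.
    double linear_headroom = 32.0 / 0.5;                // ~64x, the "factor of 60" above

    // The comment pegs the realistic shrink at one order of magnitude,
    // plus two orders of magnitude from making the die physically larger.
    double total_orders_of_magnitude = 1.0 + 2.0;
    double doublings_left = total_orders_of_magnitude * std::log2(10.0);

    std::printf("linear headroom ~%.0fx, ~%.0f doublings left (2^10 = 1024)\n",
                linear_headroom, doublings_left);        // ~64x, ~10 doublings
    return 0;
}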
Re: (Score:2)
Who says we have to keep using silicon?
Re: (Score:2)
Who says we have to keep using silicon?
Without any numbers at all, the density of crystalline "stuff" doesn't vary by much more than an order of magnitude, and silicon's already on the light end of that scale, compared to iron, tungsten, etc.
But, I'll humour you. Let's consider humble Lithium, with a van der Waals radius around 0.2 nm. Not going to gain very much over silicon. And there are slight problems with the electrical characteristics. On the good side, you could make something that looks vaguely like a transistor out of lithium. On th
Re: (Score:2)
Re: (Score:2)
If we (humanity) figure out how to perform construction tasks on the nano-scale level at large scale (OK, large for the nano scale), we can surpass the physical limits you posted.
A big *if* of course, but we are making progress even now. Most people don't ponder 'if' anymore, only 'when'.
Scientists feel much more comfortable stating the limits of physics, which we mostly know (and any inaccuracies will just raise the bar, not lower it)
Only so much matter and energy can be in a given space at a time, and t
Re: (Score:2)
The Core i7 965 using 7zip as a benchmark rates out at 18 billion instructions per second. That would be 18.4 trillion instructions per second after 10 more doublings.
To put this in context, high definition 1080p30 video throws 62.2 million pixels per second. That i7 965 could use 289 instructions per pixel, while that 1024x computer could use 295936 instructions per pixel.
Translation: The future is still a hell of a lot better.
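A quick sketch of that arithmetic, for anyone who wants to poke at the numbers; the 18 billion instructions/second figure for the i7 965 on 7zip is taken from the comment above:

#include <cstdio>

int main() {
    const double ips_today = 18e9;                  // i7 965 on 7zip, as quoted above
    const double ips_after = ips_today * 1024.0;    // 10 more doublings = 2^10

    const double pixels_per_second = 1920.0 * 1080.0 * 30.0;   // 1080p30, ~62.2M pixels/s

    std::printf("today:  ~%.0f instructions per pixel\n", ips_today / pixels_per_second);
    std::printf("future: ~%.0f instructions per pixel\n", ips_after / pixels_per_second);
    // Prints roughly 289 and 296000 -- the same ballpark as the figures above.
    return 0;
}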
Re: (Score:2)
Umm? (Score:5, Insightful)
Moore's law, depending on the exact formulation you go with, posits either that transistor density will double roughly every two years or that density at minimum cost/transistor increases at roughly that rate.
It is pretty much exclusively a prediction concerning IC fabrication(a business that NVIDIA isn't even in, TSMC handles all of their actual fabbing), without any reference to what those transistors are used for.
Now, it is true that, unless parallel processing can be made to work usefully on a general basis, Moore's law will stop implying more powerful chips, and just start implying cheaper ones(since, if the limits of effective parallel processing mean that you get basically no performance improvements going from X billion transistors to 2X billion transistors, Moore's law will continue; but instead of shipping faster chips each generation, vendors will just ship smaller, cheaper ones).
In the case of servers, of course, the amount of cleverness and fundamental CS development needed to make parallelism work is substantially lower, since, if you have an outfit with 10,000 apache instances, or 5,000 VMs or something, they will always be happy to have more cores per chip, since that means more apache instances or VMs per chip, which means fewer servers (or the same number of single/dual socket servers instead of much more expensive quad/octal socket servers) even if each instance/VM uses no parallelism at all, and just sits at one core = one instance.
Re: (Score:3, Interesting)
Who would have thunk it (Score:4, Interesting)
Guy at company that does nothing but parallel processing says that parallel processing is the way to go.
Moore's law has to stop at some point. It's an exponential function after all. Currently we are in the 10^9 range (a billion transistors or so); our lower estimates for atoms in the universe are 10^80.
(80 - 9) * (log(10)/log(2)) ≈ 236.
So clearly we are going to reach some issues with this doubling thing sometime in the next 236 doublings...
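For anyone who wants to check that bound, a minimal sketch of the arithmetic (the ~10^9 transistor count and the 10^80 atoms-in-the-universe estimate are the figures discussed above):

#include <cmath>
#include <cstdio>

int main() {
    const double transistors_now = 1e9;      // rough order of magnitude of a current flagship chip
    const double atoms_in_universe = 1e80;   // lower-end estimate quoted above

    // Number of doublings before a single chip would need every atom in the universe.
    double doublings = std::log2(atoms_in_universe / transistors_now);
    std::printf("at most ~%.0f doublings left\n", doublings);   // ~236
    return 0;
}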
Re: (Score:3, Insightful)
Parallel processing *is* the way to go if we ever desire to solve the problem of AI.
Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.
Just because he might be biased doesn't mean he's wrong.
--
BMO
Re: (Score:2)
I don't care about AI (he says ignoring that his PhD dissertation was in the fringe of god-damn-AI)...
I actually do agree with his fundamental claim; that doesn't change the fact that you need to find someone else to say it for an article that isn't just PR.
Re: (Score:2)
Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.
The brain's trillions of 3D interconnections blow away anything that has ever been produced on 2D silicon.
Current parallel processing efforts are hardly interconnected at all, with interprocessor communication being a huge bottleneck. In that sense, the brain is much less parallel than it seems. Individual operations take place in parallel, but they can all intercommunicate simultaneously to become a cohesive unit.
To match the way the brain takes advantage of lots of logic units, current computer architectu
Re: (Score:2)
Current parallel processing efforts are hardly interconnected at all, with interprocessor communication being a huge bottleneck.
yes, but this is because we demand accuracy and determinism from our silicon.
Even in the case of individual neurons, the same inputs don't always throw the same output, or at least not within a predictable time-frame. It's sloppy/messy stuff happening in our brain. The 'trick' to AI may in fact be the sloppy/messy stuff forcing the need for high (but also sloppy/messy) redundancy.
Re: (Score:2)
Human brains have nothing whatsoever in common with modern computers, and making facile comparisons is counter-productive.
Re: (Score:2, Informative)
Re: (Score:2)
CPUs with a "feature size" of about 22nm are currently in development. A silicon atom is 110pm across, with the largest stable atoms being about 200 pm. In other words, CPU features are currently about 100-200 atoms across. Can't increase too many more times before that becomes a problem...
Re: (Score:2)
I don't disagree, but if you are going to put up a story on it at least find one written by someone with just a little less bias.
I'm sure there are lots, since it's a pretty obvious fact that you get more bang for your buck (and maybe more importantly for your power) from more, less powerful units in parallel than from fewer big ones. Well, if the programmers would get with the damn program, anyway. Someone not writing PR for a GPU company must have written one...
Infinite? (Score:3, Informative)
Dude, this is clearly some sense of the word "infinite" of which I haven't been previously aware. A couple things: 1) atoms -> electrons -> quarks is three levels, which is not exactly infinity. 2) I'm not sure if this is what you meant, but electrons are not made of quarks. They're truly elementary particles. 3) No one thinks there's anything below quarks - the Sta
Re: (Score:2)
The pedant in me feels the need to point out that quarks are several times more massive than electrons (which I guess is as close to the concept of larger/smaller as you can get here).
HDLs (Score:2)
Sometimes I think that parallel programming isn't a "new challenge" but rather something that people do every day with VHDL and Verilog...
(Insert your own musings about FPGAs and CPUs and the various tradeoffs to be made between these two extremes.)
Re: (Score:2)
The relative complexity of a C++ program vs what someone can realistically do in HDL is vastly different. Try coding Office in HDL and watch as you go Wayne Brady on your computer.
Re: (Score:2)
Most modern CPUs and the compilers for them are simply not designed for multiple threads/processes to interact with the same data. As an exercise, try writing a lockless single-producer single-consumer queue in C or C++. If you could make the same assumption in this two-thread example that you can make in a single-thread problem, namely that the perceived order of operations is the order in which they're coded, then it'd be a snap.
But you see, once you start playing with more than one thread of execution, yo
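Since the parent poses it as an exercise, here is a minimal sketch of a lockless single-producer single-consumer ring buffer using C++11 atomics; the capacity, element type, and test driver are arbitrary choices, and this is illustrative rather than production-hardened:

#include <atomic>
#include <cstddef>
#include <cstdio>
#include <thread>

// Single-producer single-consumer ring buffer. The ordering constraints on
// head/tail are exactly the "perceived order of operations" problem the
// parent describes: the producer must publish the element before advancing
// tail, and the consumer must read it before advancing head.
template <typename T, size_t Capacity>
class SpscQueue {
public:
    bool push(const T& value) {                       // producer thread only
        size_t t = tail_.load(std::memory_order_relaxed);
        size_t next = (t + 1) % Capacity;
        if (next == head_.load(std::memory_order_acquire))
            return false;                             // full
        buf_[t] = value;
        tail_.store(next, std::memory_order_release); // publish the element
        return true;
    }
    bool pop(T& out) {                                // consumer thread only
        size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;                             // empty
        out = buf_[h];
        head_.store((h + 1) % Capacity, std::memory_order_release);
        return true;
    }
private:
    T buf_[Capacity];
    std::atomic<size_t> head_{0};
    std::atomic<size_t> tail_{0};
};

int main() {
    SpscQueue<int, 1024> q;
    std::thread producer([&] {
        for (int i = 0; i < 100000; ++i)
            while (!q.push(i)) { /* spin while full */ }
    });
    long long sum = 0;
    for (int received = 0; received < 100000; ) {
        int v;
        if (q.pop(v)) { sum += v; ++received; }
    }
    producer.join();
    std::printf("sum = %lld\n", sum);                 // 4999950000
    return 0;
}

The acquire/release pairs are precisely where the single-thread ordering assumption breaks down: without them, the consumer could observe the advanced tail before the element write became visible.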
Re: (Score:2)
Atomicity is a whole different level of fun as well. I was lucky: at the boundary I was dealing with inherently atomic operations (well, so long as I have my alignment correct, which is not guaranteed by new), but if you're not... it's yet more architecture-specific code.
That's also the main complication that I raise when the conversation comes around to personality uploading - the brain is a clockless system with no concept of atomicity at all. How do you take a "snapshot" of that?
Re: (Score:2)
The solution for multi-tasking & FPGAs is to have multiple FPGA sub-units available, and limit access to them to device drivers running in kernel space. Need a new FPGA program? Load a driver for it -- if no more FPGA units are available, the driver doesn't load.
Consider the source, folks... (Score:2)
Seriously. The headline for this should read "Moore's Law Will Die Without GPUs, Says GPU Maker."
Or, to put it another way, the GPU maker keeps invoking Moore's Law, but I do not think it means what he thinks it means. You can't double semiconductor density by increasing the number of chips involved.
Re: (Score:2)
Even better. The headline should represent what the fuck the article says.
Let's not play fast-and-loose with the word "law." (Score:5, Informative)
I'm probably being overly pedantic about this, but of course the word "law" in "Moore's Law" is almost tongue-in-cheek. There's no comparison between a simple observation that some trend or another is exponential--most trends are over a limited period of time--and a physical "law." Moore is not the first person to plot an economic trend on semilog paper.
There isn't even any particular basis for calling Moore's Law anything more than an observation. New technologies will not automatically come into being in order to fulfill it. Perhaps you can call it an economic law--people will not bother to go through the disruption of buying a new computer unless it is 30% faster than the previous one, therefore successive product introductions will always be 30% faster, or something like that.
In contrast, something like "Conway's Law"--"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"--may not be in the same category as Kepler's Laws, but it is more than an observation--it derives from an understanding of how people work in organizations.
Moore's Law is barely in the same category as Bode's Law, which says that "the radius of the orbit of planet #N is 0.4 + 0.3 * 2^(N-1) astronomical units, if you call the asteroid belt a planet, pretend that 2^-1 is 0, and, of course, forget Pluto, which we now do anyway."
How to solve it - rename "Moore's Law"? (Score:4, Insightful)
Moore's Law isn't exactly "a law". It isn't like "the law of gravity" where it is a certain thing that can't be ignored*. It's more "Moore's Observation" or "Moore's General Suggestion" or "Moore's Prediction". Any of those are only fit for a finite time and are bound to end.
* Someone's bound to point out some weird branch of Physics that breaks whatever law I pick or says it is wrong, but hopefully gravity is quite safe!
Re: (Score:2)
The end of Moore's Law would be good (Score:2)
It would mean that development cycles slow down, that algorithmics finally wins over brute force, and that software quality would have a chance to improve (after going downhill for a long time).
GPUs as CPUs? Ridiculous! Practically nobody can program them and very few problems benefit from them. This sounds more like Nvidia desperately trying to market their (now substandard) products.
Re: (Score:2)
Algorithms won over brute force a long time ago. We're using brute force on the good algorithms!
Seriously, there are very few big CPU tasks that have not had a LOT of smart people look at the algorithms. The idea that we'll suddenly take a big leap in algorithmic efficiency when Moore's law ends is laughable.
Moore's Law (Score:2)
What the hell does Moore's Law have to do with parallel computing?
It is concerned with the number of transistors on a single chip. Moore's Law has been dead in practical terms for a while (smaller transistors are too expensive / require too much power and cooling), which is the reason parallel computing is becoming essential in the first place.
TFA fails computer science history forever.
GPUs are hardly in better shape (Score:2)
Aside from being a bit of a hack, there are 3 competing APIs and some of them are tied to certain combinations of operatin
Re: (Score:2)
the traditional method of CPU usage is a hack by that standard as well. BIOS loads the CPU information; CUDA just adds another layer.
which leads me to wonder, do we really need multi-core CPUs? perhaps we just need a CPU that can handle the throughput for running the OS and its most basic functions, and actually pass off all other processes to dedicated components.
In other news... (Score:3, Funny)
Albert P. Carey, CEO of Frito-Lay warns consumers that the continuation of the Cheddar-Dorito law and the survival of humanity ultimately relies on zesty corn chips.
SlashScript (Score:2)
if story.contains('Moore\'s') and story.contains('die', 'dead', 'end in'):
    story.comment('Moore\'s Law is an observation, not a law! and.... IT WILL NEVER DIE!!!')
Theories, theories. (Score:2)
maybe nvidia should stop making space heaters? (Score:2)
seriously, in the last few years Intel has produced some good CPUs with good power efficiency. Contrast that with Nvidia, where every generation you need more and more power for their cards, and the latest generation puts out something like 250W of heat. Years ago we used a Compaq all-in-one cluster server at work as a space heater; the way Nvidia is going, all you need to do is buy one of their cards and you can heat your house in the winter without buying heating oil.
Re: (Score:2)
That's video cards in general. While a CPU is 'fast enough', a video game wants real-time physics and realistic graphics, and that usually means more power.
There is an end... (Score:2)
Moore's Law works until the required elements are smaller than quantum objects. Actually, in our current state of technology and anything practical on the horizon, it works until the required elements are smaller than single atoms. Then there is no way to make stuff faster...
Sort of.
While GPUs might 'save Moore's Law', actually they just add other CPUs to each system. So more cores = more performance, and Moore's Law is still relevant.
Now, to change the entire computing paradigm to actually take advantag
Moore's law is frequently misunderstood (Score:2)
though I rarely see the usual mistakes being made by the slashdot community.
I tried explaining to a friend of mine why it was that, in 2004, his standard desktop configuration had a CPU clocking in at 2 GHz, while the standard configuration of the machines available last Christmas had CPUs clocking in at 2.4 GHz (in the same price range). He seemed to think it would be in the 8-10 GHz range by now.
Re: (Score:2)
His assumption is true based on historic computer performance gains.
On /., Moore's law is often misunderstood. This 'scientist' doesn't use it correctly in the article, either.
And what the hell does a chief scientist at Nvidia do? I'd like to see some of his published experiments and data.
Please fire his ass (Score:2)
for not knowing what Moore's law is.
"ntel’s co-founder Gordon Moore predicted that the number of transistors on a processor would double every year, and later revised this to every 18 months.
well, thats half of the rule.
Moore, meet Amdahl (Score:2)
Sure, you can add more transistors. And you can use those transistors to add more cores. But how useful will they be? That's what Amdahl's Law [wikipedia.org] tells you. And Amdahl's Law is harder to break than Moore's.
GPUs only add one more dimension to Amdahl's Law: what portion of the parallelizable portion of a problem is amenable to SIMD processing on a GPU, as opposed to MIMD processing on a standard multi-core processor.
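For reference, a minimal sketch of Amdahl's Law itself; the parallel fractions and core counts below are arbitrary illustrations:

#include <cstdio>

// Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
// work that can be parallelized and n is the number of processors.
static double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even with a huge core count, the serial fraction caps the speedup.
    for (double p : {0.5, 0.9, 0.99}) {
        std::printf("p=%.2f: 8 cores -> %.2fx, 512 cores -> %.2fx, limit -> %.0fx\n",
                    p, amdahl_speedup(p, 8.0), amdahl_speedup(p, 512.0),
                    1.0 / (1.0 - p));
    }
    return 0;
}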
I think he is beating on the wrong people (Score:3, Interesting)
We just bought the latest version of software from one company and found that it ran a lot slower than the earlier version. I happened to stick it on a VM with only one core and it worked a lot faster.
We talked about MATLAB yesterday not being able to do 64 bit integers, big deal. I was told that their Neural Network package doesn't have parallel processing capabilities. I was like you have got to be freaking kidding me. A $1000 NN package that doesn't support parallel processing.
FAIL (Score:2)
Re: (Score:3, Insightful)
Well, without the slowness of windows, we wouldn't need faster computers, so there'd be nothing driving innovation.
Re: (Score:2, Funny)
Re: (Score:2)
affect
Re: (Score:3, Informative)
He doesn't say that it should be done via the GPU.
He says Intel and AMD need to focus on Parallelism. This is true.
The GPU/CPU comment was driven by the author of the article. Clearly as an attempt to drum up some sort of flame war to drive hits to the article.
Now, I would assume part of his job is to figure out how to properly do that with GPUs; however, at no place is he implying that only Nvidia can do this or that it can only be done on the GPU.
Re: (Score:2)
It isn't an "if". It will fail, because nature doesn't provide useful building blocks smaller than an atom.
Currently we are at 32 nm-scaled technology. The characteristic size of an atom is approximately twice the Bohr radius, or ~ 1 angstrom. So our transistor size is roughly 320 times larger than a hydrogen atom. Now, a hydrogen atom sized transistor is a pipe dream, and things will break down far sooner than that, but it constitutes a boundary beyond which no transistor resembling anything we have today