Limits to Moore's Law Launch New Computing Quests
tringtring alerts us to news that the National Science Foundation has requested $20 million in funding to work on "Science and Engineering Beyond Moore's Law." The PC World article goes on to say that the effort "would fund academic research on technologies, including carbon nanotubes, quantum computing and massively multicore computers, that could improve and replace current transistor technology." tringtring notes that quantum computing has received funding on its own lately, and that work on multicore chips has intensified the hunt for practical parallel-programming techniques. Improvements are also still being made to conventional transistor technology.
Is this necessary? (Score:2, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2)
But, it's just a request for funding, not a prize, so it doesn't matter.
Re:Is this necessary? (Score:4, Informative)
Re:Is this necessary? (Score:5, Insightful)
Who said anything about a prize? The PC World article talks about 'funding for research', i.e. cash given to researchers to develop new technology.
Unlike space travel, research in chip design has been shown to be profitable at the commercial level, [...] Whether or not a prize is offered, faster computers and better technology are what we as consumers expect in this area, and what we will pay for.
It's true that a lot of commercial effort goes into current chips and the improvement thereof, but there isn't much commercial effort going into areas like quantum computing because the potential rewards are a loooooong way off. Your money is much safer invested in designing a 32-core Core2ThirtyTwo to be made in 3 years, compared to quantum computing, a technology that faces substantial scalability roadblocks and that no-one knows how to design algorithms for.
Most of the quantum computers demonstrated so far rely on nuclear magnetic resonance (NMR), but it is thought this technique will not scale well - it is believed that fewer than 100 qubits would be possible. As of 2006, the largest quantum computer ever demonstrated was 12 qubits (making it capable of such tasks as quickly finding the prime factors of a number... as long as that number is less than 4096, i.e. 2^12).
In summary, promising future technologies often make poor investments because they are (a) experimental and (b) a long way off. So some funding to make research possible wouldn't go amiss.
Just my $0.02.
Re: (Score:1, Offtopic)
-Albert Einstein
Moore's Law is bullshit. (Score:5, Insightful)
It just happened to work out that way. We're about to reach a point where current transistors won't cut it anymore. At that point we'll either stagnate, because we can't make a process smaller than 10 nanometers and we can't find a different functional technology, or we'll make an enormous jump in performance because we find something in a different field, be it optics or nanotubes, that does make processors a lot faster.
Moore's law isn't a law, and should never have been called one. It's merely a prognosis.
Microprocessor technology is driven by the market. If the general consumer thinks their PC is fast enough, manufacturers will focus on energy efficiency to sell more CPUs, and speed will become a secondary concern.
Re: (Score:2)
Re: (Score:1)
A law is an observation. That's what it means. There seems to be a misconception that "law" means "fundamental law of nature which is 100% proven to be true", but that's not correct.
(True, you might say it's not a scientific law in that it's not an observation about scientific matters, but instead, an observation about technology, economics and other factors, but it's still a law. You might as well moan that it's not a legal law - big deal.)
Re: (Score:2)
Re:Moore's Law is really about price-performance (Score:3, Insightful)
So the expensive fast chips get faster to sell to customers with the need for speed, and the production technology gets r
Re: (Score:1)
Re: (Score:1)
Actually, I believe he based it on observed past behaviour, so even though he may have intended it to also be a prediction, calling it a law is fine.
I'm also confused by "merely" - I'd argue that saying it is a prognosis carries the implication that it will hold in future, whilst "law" implies that, just like other laws, it is merely a generalisation of observed behaviour.
Re: (Score:2)
So it's more like an economic law.
About your argument 'prognosis vs law', well, almost anything in economics is more a prognosis than a law, but whatever, nobody cares.
"If you build it, they will come..." (Score:5, Interesting)
Intel sunk billions into the development of Itanium on the premise that if they make a VLIW architecture, compiler developers will find a way to automatically extract the parallelism necessary to make good use of it. A company with the size, resources, and engineering knowledge of Intel made the mistake of assuming that a fundamental shift in thinking could be driven by money and sheer desire, but it turns out that the problem is not just hard - that would make it solvable given sufficient effort and money - it's actually impossible. Those compiler advances never materialized; you can't draw blood from a stone.
The quest for parallelism in ordinary software might just be similar. Developing tools to make this automated and easy with low overhead is akin to putting a dozen smart people in a room and saying "think up the next big idea that will make me millions." Innovation doesn't work that way; it can't be forced... and money isn't going to make the impossible into the possible.
I think we'll see a move to eight and then maybe even sixteen cores on a consumer-level chip before we see things start going back in the other direction. This will necessarily mean a slowdown in the development of processors as CPU manufacturers go back to wringing every last bit of single-threaded performance out of their designs.
Thoughts?
video games (Score:1)
Re:"If you build it, they will come..." (Score:5, Interesting)
It's possible that if Itanium had been able to execute x64 or even x86 code at a competitive speed, we'd all be using IA-64 by now (or at least hoping that new programs were recompiled for it).
Also, I don't actually think we'll have a shift back to single-threaded apps. The fact is that most programs run "fast enough" now, even single-threaded on quadcore systems. The ones that don't (mostly games and some professional software) are frequently relatively easy to multithread. I suspect most programs will stay single-threaded, and the ones that need maximum speed will become extremely multithreaded.
Re: (Score:2)
It's not that Eclipse has always been this heavy, rather, innovations in machine assistance expand the s
We *will* see single threaded apps... well sort of (Score:1)
Not completely true. We might still see single-threaded code at the conceptual level, with languages supporting latent parallelism even though the program flow is conceptually single-threaded. Think parallel "for" loops and futures. The burden of actually distributing the parallel execution would be the language runtime's responsibility. This way you have code that acts and behaves as if it is single-threaded but actually scales across multiple processor cores.
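To make that concrete, here's a rough sketch (my own toy C++ example, not from any particular language proposal): the code reads top to bottom like a single-threaded program, but a future lets the runtime farm half of the work out to another core if one is available.

    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Toy example: sum a big vector, written as if single-threaded.
    // Build with something like: g++ -std=c++11 -pthread sum.cpp
    int main() {
        std::vector<long> data(1000000, 1);
        auto mid = data.begin() + data.size() / 2;

        // The runtime may run this lambda on another core; we just read
        // the result later as if it were an ordinary value.
        std::future<long> back = std::async(std::launch::async, [&] {
            return std::accumulate(mid, data.end(), 0L);
        });

        long front = std::accumulate(data.begin(), mid, 0L);
        std::cout << "sum = " << front + back.get() << "\n";
        return 0;
    }

Nothing in the source says which core does what; that bookkeeping is the runtime's problem, which is exactly the point.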
Re: (Score:2)
Intel botched their first hack at Itanium. They weren't willing to pony up another couple of billion to get it right the second time. By then their performance war against AMD had set the
Re: (Score:1)
While I agree with some of your points, I disagree with your details. There's no proof that compilers can't be made smart enough for that - just because it didn't happen doesn't mean it couldn't.
In fact, I think people are working on dealing with things like Nested Data Parallelism [microsoft.com] (pdf) in compilers right now. I think this will happen in functional languages very, very soon (Haskell, someone below mentioned Erlang). Simpler things, like dealing with flat data parallelism via the compiler (+ a special library) have been possible for a while (see e.g. OpenMP [wikipedia.org]).
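For anyone who hasn't played with it, here's roughly what the flat-data-parallelism-plus-library approach looks like (my own toy example, not taken from the NDP paper): the loop is written as ordinary sequential code, and one pragma tells the compiler the iterations are independent.

    #include <iostream>
    #include <vector>

    // Toy OpenMP example: flat data parallelism over an independent loop.
    // Build with e.g. g++ -fopenmp saxpy.cpp; without -fopenmp the pragma
    // is simply ignored and the loop runs sequentially.
    int main() {
        const int n = 1000000;
        std::vector<double> x(n, 1.0), y(n, 2.0);
        const double a = 3.0;

        // Each iteration touches a different element, so the compiler and
        // the OpenMP runtime can split the index range across cores.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            y[i] = a * x[i] + y[i];
        }

        std::cout << "y[0] = " << y[0] << "\n";
        return 0;
    }

Nested data parallelism is the much harder generalization of this, where the parallel work items themselves contain irregular amounts of parallel work.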
Re: (Score:2)
Re: (Score:2)
Parallelism for ordinary software is already here; it's a matter of time before it is adopted by mainstream applications.
Re: (Score:2)
Itanic failed because the machines had horrible price/performance except in very tiny niches. One of the things that killed Itanic is x86 clusters - aka. parallel programs.
Multicore processors, in contrast, are free. What I mean by that is this: Dual core processors cost basically the same as single core processors at the same core speed. You can still buy single core processors today, but nobody does - there's no reason not to take the free second core. For a variety of reasons, the same thing will be tru
Re: (Score:2)
By the time the systems were shipping and there was a mainstream OS (read "Windows") to run applications on it, the AMD64 and multi-core x86 processors were already appearing.
Had HP invested more on HP-UX over the years (making it escape the narrow niche they carved for it), had Linux been more mainstream by that timeframe (read "a decent desktop OS", which it kind of wasn't), had Intel invested a lot of resources making GCC deliver the promised performance o
The real question is (Score:3, Funny)
patents? (Score:3, Interesting)
~Neff
Re:patents? (Score:4, Insightful)
It's not a prize. It's funding - a budget. This is the older-than-dirt story of "If you build it, they will come!" vs. "I have a 0.01% chance of succeeding if I try to build it, so who's going to feed my family in the 99.99% probable case that I fail?"
cost per computation / 3-D Chips (Score:5, Interesting)
http://mtlweb.mit.edu/researchgroups/icsystems/3dcsg/ [mit.edu]
And we'll still see the same exponential improvement in GOPs/$ for a long time after 3-D transistor density maxes out. The economics that drive the exponential cost-per-computation trend have more to do with the volume of demand, which offsets high fixed production costs, and less to do with our ability to actually cram more transistors onto a chip.
Re:cost per computation / 3-D Chips (Score:4, Insightful)
Re:cost per computation / 3-D Chips (Score:4, Interesting)
Re: (Score:3, Informative)
We're still -far- away from the theoretical limits though.
Erasing a -single- bit MUST consume at least kT ln 2 joules (Landauer's principle), where T is the temperature in kelvin and k is the Boltzmann constant, about 1.38 x 10^-23 J/K.
So if your CPU runs at 300 K (cooling it more won't help, because then you'll spend
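To put a rough number on it (my own back-of-the-envelope, using the kT ln 2 form of the limit):

    E_min = k * T * ln 2
          ~ (1.38 x 10^-23 J/K) * (300 K) * 0.693
          ~ 2.9 x 10^-21 J per bit erased

So even a chip irreversibly flipping 10^20 bits every second at room temperature would only be forced by that limit to dissipate something like 0.3 W. Real transistors today spend roughly four to six orders of magnitude more energy per switching event, which is why we're still so far from the wall.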
Re: (Score:1)
Even if we keep getting exponential growth of transistors per dollar in the coming years, the question is what to do with them. Arranging them in useful circuits is increasingly difficult because at a certain point adding cache and execution units to a processor just isn't very helpful (hence multi-core).
I disagree with at least part of the above.
The problem is that you're not thinking outside the current box we're in. It's not that we have too many transistors and too much "cache", but rather that we have too few, and will continue to have too few for a few Moore's law generations yet.
Consider the effect once that "cache" reaches the half-gig and higher level (and consider that the current cache sizes are per-core, so multiply by the number of cores to get the total per chip size, since that's what we're
Re: (Score:2)
If someone were to try it, they'd better get working on methods to cool those stacks of wafers well, and on ways to make the wafers cheaper...
If you make a chip with a stack of, say, 10 wafers, you've also had to diffuse 10 wafers, which costs about the same as ten single-wafer chips - diffusion doesn't magically get cheaper when you stack the wafers afterwards. I'm sure the wafer bonding costs some dough too.
And it generates the heat of 10 chips of one
Re: (Score:2)
Wikipedia [wikipedia.org]
Control waste heat by managing entropy.
Re: (Score:1)
Killer app? (Score:5, Funny)
Moore's Law might be exponential, but who's to say that demand for processing power is also...
Re: (Score:2)
"The minimum spec is 640 quantum cores."
Actually, it reads 640 universes. You are guaranteed to get the right answer in one of these.
Re: (Score:3, Interesting)
The short answer is all the applications that run in these computers [top500.org].
I can think of at least two applications that are often in the news: protein folding and physical simulations of continuous media, like weather and climate, aerodynamics, water, oil, and gas flow in porous rocks, etc.
But I think the future applications for personal supercomputers haven't been invented yet. We don't have the brains to predict what super-human artificial intel
Re: (Score:2)
(Yeah, we don't have the database that program would need yet. But it's already being worked on.)
You can also use them to do ray tracing in a changing 3-D environment. (Think realistic games. Lots of people will pay for that one.)
Re: (Score:1, Redundant)
Re: (Score:2)
Because processing power is not what will keep us from developing a Holodeck
However, we may very well achieve some more along the lines of The Matrix or James P. Hogan's star-spanning Visar AI. I'd say it's almost a given that we'll eventually develop some kind of direct neural interface into a computer-generated virtual reality.
Re:Killer app? Second Life (Score:2)
You are right that it will be like the matrix or Hogan's story (sounds like an interesting story, I'll have to check it out).
A fully virtualized environment benefits directly from precisely the same exponential improvements that have occurred and
Re: (Score:1, Offtopic)
Re: (Score:2)
Re: (Score:2)
Increased mechanisation and computer-power means a single individual can do more and more in the same time, which again leads to the p
Re: (Score:2)
I really don't see how that's possible. The entire US film industry's gross receipts [boxofficemojo.com] were only $10 billion last year. Microsoft's net income [yahoo.com] last year was $17 billion, and Microsoft is only ONE company in the tech industry.
Intel, one of the major producers of the actual processing power had a net profit of nearly $7 billion. [yahoo.com]
I think it's safe to
Re:Killer app? (Score:4, Insightful)
1) parallel search
2) accurate text translation
3) accurate human speech rendering
4) raytracing for 3d graphics
5) advanced physics in 3d applications
6) more dynamic programming languages
7) better video and audio decompression
8) much faster compression
9) ultra fast large WORD document repagination
etc
Re: (Score:1, Insightful)
Well, there are a couple, graphics processing for example. Governments in particular, however, might be interested in two different areas that would profit considerably from massively parallel computing: (semi-)brute-force cryptanalysis and simulation (think weapons, in particular nuclear ones, since it's difficult and expensive to do real tests with them).
The Hunt (Score:1, Flamebait)
Breakin' da law, breakin' da law... (Score:2, Funny)
Officer : "Sir, I'll have to arrest you for breaking Moore's Law"
Intel exec : "Oh noes!"
Diamond is a virtual's best friend (Score:1)
$20 million is nothing (Score:2)
What about Single Core Performance (Score:1)
Re: (Score:1)
Automatic Parallelism is a fallacy... (Score:2)
Being able to automate the task of sending threads off to various cores is pretty much an impossibility. The number of exceptions to any set of rules that would allow the compiler, or even a runtime environment with managed code, to do this would be so large that the MCP would be in a constantly busy state just figuring out whether it was possible to send various threads off to n cores, much less keeping all the threads synced and sorting out the wait times for the various threads on the various CPUs to finally all be finished.
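A toy example (mine, not the parent's) of why this is so hard: a loop where every iteration feeds the next. No rule-based compiler or managed runtime can simply fan these iterations out to separate cores without changing the result; this particular recurrence could in principle be rewritten as a parallel scan, but the compiler would first have to recognize and prove that, and that proof is exactly the hard part.

    #include <iostream>
    #include <vector>

    // Toy example of a loop-carried dependency: a[i] needs a[i-1], so the
    // iterations cannot simply be handed to different cores the way an
    // independent loop could be.
    int main() {
        std::vector<double> a(1000, 1.0);

        for (std::size_t i = 1; i < a.size(); ++i) {
            a[i] += 0.5 * a[i - 1];   // depends on the previous iteration
        }

        std::cout << "a.back() = " << a.back() << "\n";
        return 0;
    }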
What happened to GaAs FET and germanium? (Score:2)
What happened to Gallium Arsenide technology? It's supposed to be 10 times faster than silicon.
And what about silicon germanium?