Intel Shows Off 80-core Processor 222
thejakebrain writes "Intel has built its 80-core processor as part of a research project, but don't expect it on your desktop any time soon. The company's CTO, Justin Rattner, held a demonstration of the chip for a group of reporters last week. Intel will be presenting a paper on the project at the International Solid State Circuits Conference in San Francisco this week. 'The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop. That's a level of performance that required 2,500 square feet of large computers a decade ago. Intel first disclosed it had built a prototype 80-core processor during last fall's Intel Developer Forum, when CEO Paul Otellini promised to deliver the chip within five years.'" Update: 06/01 14:37 GMT by Z : This article is about four months old. We discussed this briefly last year, but search didn't show that we had discussed it in February.
IA64 (Score:5, Insightful)
It didn't work out too well for Intel.
Deja vu all over again (Score:4, Insightful)
Stop me if you've heard this one before...
core 2 duo has a higher transistor density? (Score:2, Insightful)
Re:core 2 duo has a higher transistor density? (Score:3, Insightful)
Not only a dupe... but of an old story (Score:5, Insightful)
Not to mention that Slashdot (even Zonk) covered this LAST YEAR [slashdot.org].
But that's OK; I'm sure Slashdot gave insightful and cogent coverage of real events that actually matter to geeks on this site, you know, like the release of a new major version of GCC [gnu.org].
Oh wait... that (like a bunch of other actually interesting stories) would be in the aptly-named "sir not appearing on this website" category, due to it not making enough banner revenue.
Re:It may be known as "a teraflop", but... (Score:5, Insightful)
If we're going to be speaking strictly, get it right:
FLoating point Operations Per Second
AMD's response (Score:4, Insightful)
Besides, with most software being single-threaded, I don't know if a consumer will need more than 4 cores for a while. I can still see software companies trying to come up with ways to keep all 80 cores busy... "Well, they need at least 20 anti-virus processes, 10 genuine advantage monitors, and we'll install 100 shareware applications with cute little icons in the task bar by default. There, that should keep all the cores nice and warm and busy -- our job is done!"
But in all seriousness, I would expect some extremely realistic environmental physical simulations (realtime large n-body interactions and perhaps realtime computational fluid dynamics)...now that's something to look forward to!
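To make the n-body point concrete, here's a minimal sketch of one Euler step of an O(n^2) gravitational n-body simulation, vectorized with NumPy. All names, constants, and the softening trick are illustrative choices, not taken from any real simulator; this is exactly the kind of embarrassingly parallel pairwise arithmetic an 80-core floating-point chip would chew through.

```python
# Toy O(n^2) gravitational n-body step (G = 1), vectorized with NumPy.
# 'soft' is a softening length to avoid division by zero at close range.
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, soft=1e-3):
    """Advance all bodies one Euler step under mutual gravity."""
    # Pairwise displacements: diff[i, j] = pos[j] - pos[i]
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist3 = (np.sum(diff**2, axis=-1) + soft**2) ** 1.5
    np.fill_diagonal(dist3, np.inf)  # remove self-interaction
    # acc[i] = sum_j diff[i, j] * mass[j] / |r_ij|^3
    acc = np.sum(diff * (mass / dist3)[..., np.newaxis], axis=1)
    return pos + vel * dt, vel + acc * dt

# Two equal masses at rest attract each other along the x axis.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.zeros((2, 3))
mass = np.array([1.0, 1.0])
pos, vel = nbody_step(pos, vel, mass)
```

After one step the two bodies have picked up equal and opposite velocities toward each other, as momentum conservation demands. Each body's acceleration is independent of the others', so the outer loop parallelizes trivially across cores.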
For the love of god... (Score:3, Insightful)
This isn't a general purpose processor. Think "cell processor" on a larger scale. You wouldn't be running your firefox or text editor on this thing. You'd load it up and have it do things like graphics processing, ray tracing, DSP work, chemical analysis, etc...
So stop saying "we already don't have multi-core software now!!!" because this isn't meant for most software anyways.
Tom
Re:For the love of god... (Score:4, Insightful)
So no, you're not going to be running Firefox or your text editor on this model (in fact, I doubt you even _could_; these cores are currently very, very stripped down in their capacity to do work, to where they're basically two MACs tied to a small SRAM and a "network adapter"), but never say never: this style of chip is right around the corner.
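For readers unfamiliar with the term: a MAC is a multiply-accumulate unit, the primitive that computes acc += a * b each cycle. A dot product is just a chain of MACs, which is why cores built around them excel at signal processing and linear algebra but little else. A hedged sketch in Python (the function names are illustrative):

```python
# Illustrative only: the multiply-accumulate (MAC) primitive and a dot
# product built from it. Real MAC units do this in hardware, one per cycle.
def mac(acc, a, b):
    """One multiply-accumulate: acc + a * b."""
    return acc + a * b

def dot(xs, ys):
    """Dot product as a chain of MACs."""
    acc = 0.0
    for a, b in zip(xs, ys):
        acc = mac(acc, a, b)  # one MAC per element pair
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```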
Re:Frak everything, we're doing 80 blades (Score:2, Insightful)
http://money.cnn.com/2005/09/14/news/fortune500/g
Re:cue (Score:1, Insightful)
Why would this be? And what is with the Mac Pro nonsense? Do you really think only Apple makes 8-core machines?
Re:Older Story (Score:3, Insightful)
But with single-thread performance growth at a virtual standstill, Moore's law is going to result in exponential growth in the number of cores, whether or not we're ready to write software for them.
I wonder if we won't move towards a more "biological" paradigm: massive parallelism with massive redundancy, inefficient from a computational standpoint but robust to hardware and software bugs.