Intel Supercomputing Hardware

Intel Squeezes 1.8 TFlops Out of One Processor

Posted by Hemos
from the that's-a-lotta-juice dept.
Jagdeep Poonian writes "It appears as though Intel has been able to squeeze 1.8 TFlops out of one processor and with a power consumption of 62 watts." The AP version of the story is mostly the same; a more technical examination of TeraScale is also available.

Comments Filter:
  • by tomstdenis (446163) <tomstdenis@@@gmail...com> on Monday February 12, 2007 @09:21AM (#17982226) Homepage
    The trick, as with SPEs, is finding ways to use them efficiently in as many tasks as possible.

    I'm glad to see Intel is using their size for more than x86 core production though.

    Tom
  • Just imagine (Score:2, Insightful)

    by andyck (924707) on Monday February 12, 2007 @09:35AM (#17982386)
    "Intel" "Introducing the NEW CORE 80, personal laptop supercomputer running Windows waste-my-RAM-and-CPU-cycles SP2 edition." But seriously, this looks interesting for the future. Now we just need software that can fully utilize multicore processors.
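    For what it's worth, a minimal sketch of what "fully utilizing" the cores means in software: fan the work out across however many cores the OS reports. Everything here (the `square` work function, the input range) is a made-up placeholder, not anything shipped with the chip:

    ```python
    # Sketch: spreading independent work across all available cores.
    # The work function is an illustrative stand-in for real per-core work.
    import os
    from concurrent.futures import ProcessPoolExecutor

    def square(n):
        # placeholder for a real CPU-bound task
        return n * n

    if __name__ == "__main__":
        cores = os.cpu_count()  # how many cores the OS exposes
        with ProcessPoolExecutor(max_workers=cores) as pool:
            results = list(pool.map(square, range(8)))
        print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```

    The point is that nothing in the code cares whether `cores` is 2 or 80; the hard part, as the thread notes, is having work that actually splits this cleanly.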
  • by Anonymous Coward on Monday February 12, 2007 @09:48AM (#17982542)
    Good for you for understanding that. Now if only you would make an effort to understand what xoyoboxoyobo wrote. (Hint: nowhere in his comment does he equate flops with hertz.)
  • exaflop computers? (Score:3, Insightful)

    by peter303 (12292) on Monday February 12, 2007 @10:20AM (#17982956)
    Since petaflop machines are likely by the end of the decade, it's time to imagine exaflops in 2020.
  • by Joe The Dragon (967727) on Monday February 12, 2007 @10:29AM (#17983072)
    The FSB will be a big bottleneck, even more so with the CPU needing to use it to get to RAM. You would need about 3-4 FSBs, with 1-2 MB of L2 per core, to make it fast.
  • by madhatter256 (443326) on Monday February 12, 2007 @10:41AM (#17983224)
    Yep. The only way to really use this effectively is to load it up with lots of bloatware. Imagine the tons of ads one could finally serve with this type of CPU! doubleclick.net would seriously love this.

    People still, on average, effectively use processing power equivalent to an 800 MHz Pentium 3 for basic stuff (and I'm just talking about word processing, email, and the internet, not gaming). Why would someone need a quad-core CPU and a crappy video card just for surfing the net, typing, etc.?

    In reality, that is what will ultimately happen: lots of stuff running in the background without us really noticing it. The extra speed and cores make it easier to hide spyware, because you won't notice any slowdown when it loads, whereas on an older PC you notice when something is running in the background because it slows the machine down considerably. Bloatware will end up becoming tolerable once these types of CPUs start being put in desktop PCs. People will get used to it, much as most people tolerate spam in their email.
  • Narrow Minded (Score:4, Insightful)

    by Deltronica (1063232) on Monday February 12, 2007 @10:51AM (#17983364) Journal
    Many comments on this post are centered around the processor's use as a personal computing solution. There is much more to computing than PCs! When viewed alongside specialized programming technology, bioinformatics, neurology, and psychology, this (rather large) leap in processing power brings AI to yet another level and continues the law of accelerating returns. I'm not saying "oh wow, now we can have human-like AI"; I'm just saying that the ability to process 1.8 TFlops is nothing to scoff at. Personal computing is inane and almost moot when compared to the other applications that new processors may pave the way for. Know your facts, but use your imagination.
  • by vertinox (846076) on Monday February 12, 2007 @11:47AM (#17984108)
    Does this permit the practical use of any truly breakthrough apps?

    From my understanding perhaps with that many cores, the OS could simply allocate one application per core.

    But the OS has to support that feature or have applications that know how to call unused cores.

    From my understanding, Parallels for OS X only uses one core, and it picks whichever core to run on for the best performance.

    Of course then there are applications that could be programmed to use all the cores at once if they needed to do scientific calculations or something like Ray Tracing.
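    The "use all the cores at once for scientific calculations" case can be sketched as a numerical job split into chunks across worker processes. The integrand, bounds, and chunking below are made-up illustrations, not anything from the article:

    ```python
    # Sketch: a simple Riemann-sum integration of f(x) = x*x over [0, 4),
    # split into per-process chunks so every core has an independent piece.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(args):
        start, stop, step = args
        total = 0.0
        x = start
        while x < stop:
            total += x * x * step  # left Riemann sum over this chunk
            x += step
        return total

    if __name__ == "__main__":
        step = 1e-4
        chunks = [(float(i), float(i) + 1.0, step) for i in range(4)]
        with ProcessPoolExecutor() as pool:
            approx = sum(pool.map(partial_sum, chunks))
        print(round(approx, 2))  # ≈ 21.33, i.e. roughly 4**3 / 3
    ```

    Each chunk is independent, so in principle this scales to as many cores as there are chunks; whether the OS and runtime schedule it well is exactly the commenter's point.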
  • by Anonymous Coward on Monday February 12, 2007 @12:01PM (#17984294)
    While the ray tracing algorithm is embarrassingly parallel, I would imagine memory access is not. Having 80 cores accessing pretty much the same data (mainly textures) could be a problem. Perhaps procedurally generating textures would solve this. Perhaps caching is enough. I'm no ray tracing expert so please correct me if I'm wrong here.
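    The "embarrassingly parallel, but shared data" point can be illustrated with a toy sketch: each pixel is an independent function of read-only scene data. The texture and shading here are placeholders, not real ray tracing; note also that a process pool duplicates the texture per worker, which is one crude answer to the contention question (at a memory cost):

    ```python
    # Sketch: per-pixel work that only *reads* shared scene data.
    # Toy 16x16 "texture"; real scenes would be far larger, which is
    # exactly where the parent's memory-access worry comes in.
    from concurrent.futures import ProcessPoolExecutor

    TEXTURE = [[(x * y) % 256 for x in range(16)] for y in range(16)]

    def shade(pixel):
        # each worker only reads TEXTURE; no writes, so no write contention
        x, y = pixel
        return TEXTURE[y % 16][x % 16]

    if __name__ == "__main__":
        pixels = [(x, y) for y in range(4) for x in range(4)]
        with ProcessPoolExecutor() as pool:
            image = list(pool.map(shade, pixels))
        print(image[:4])  # first scanline: [0, 0, 0, 0]
    ```

    The shading itself parallelizes trivially; whether 80 cores can all be fed texture data fast enough is a bandwidth and caching question, not an algorithmic one.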
