Intel Supercomputing Hardware

Intel Squeezes 1.8 TFlops Out of One Processor

Jagdeep Poonian writes "It appears as though Intel has been able to squeeze 1.8 TFlops out of one processor, with a power consumption of just 62 watts." The AP version of the story is mostly the same; a more technical examination of TeraScale is also available.
This discussion has been archived. No new comments can be posted.

  • Re:Oblig. (Score:5, Interesting)

    by niconorsk ( 787297 ) on Monday February 12, 2007 @10:28AM (#17982300)
    It's quite fun to consider that when the original joke was made, the processing power of that Beowulf cluster would probably have been quite close to the processing power of the processor discussed in the article.
  • by DoofusOfDeath ( 636671 ) on Monday February 12, 2007 @10:42AM (#17982462)
    Does this permit the practical use of any truly breakthrough apps?

    Does it suddenly make previously crappy technologies worthwhile? E.g., does image recognition or untrained speech recognition become a mainstream technology with this new processing power?
  • 99% is exaggerated (Score:4, Interesting)

    by Anonymous Coward on Monday February 12, 2007 @10:45AM (#17982496)
    The first thing that jumped out at me was the presence of MACs. They are the heart of any DSP (see the FIR sketch below). So, this chip is good for raw computation, although not necessarily general-purpose processing. As other posters have pointed out, this chip could become a very cool GPU. It should also be awesome for encryption and compression. Given that the processor is already an array, it should be a natural for spreadsheets and math programs such as Matlab and Scilab. Having a chip like this in my computer just might obviate the need for a Beowulf cluster. :-)
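    The multiply-accumulate the parent mentions is the inner loop of most DSP kernels. A minimal FIR-filter sketch in C, purely illustrative (the function and buffer layout are hypothetical, not from the article):

        #include <stddef.h>

        /* FIR filter: each output sample is a sum of products (MACs)
         * of the input history with the filter coefficients. */
        void fir(const float *x, const float *h, float *y,
                 size_t n, size_t taps)
        {
            for (size_t i = taps - 1; i < n; i++) {
                float acc = 0.0f;
                for (size_t k = 0; k < taps; k++)
                    acc += h[k] * x[i - k];  /* one MAC per tap */
                y[i] = acc;
            }
        }

    A hardware MAC unit retires one such accumulate per cycle, which is why an array of them suits filtering, dot products, and dense linear algebra alike.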
  • by Frumious Wombat ( 845680 ) on Monday February 12, 2007 @10:58AM (#17982650)
    Atomistic simulations of biomolecules. Chain a bunch of those together, and you begin to simulate systems on realistic time scales. Higher-resolution weather models, or faster and better processing of seismic data for exploration. Same reason that we perked up when the R8000 came out with its (for the time) aggressive FPU. 125 MFlops/proc@75MHz [netlib.org] was nothing to sneeze at 15 years ago. If they can get this chip into production in usable quantities, and if it has the throughput, then they're on to something this time.

    Of course, this could just be a single-chip CM2 [wikimedia.org]; blazingly fast but almost impossible to program.
  • Re:Oblig. (Score:2, Interesting)

    by Anonymous Coward on Monday February 12, 2007 @11:00AM (#17982684)
    It's simply not true that you could replace today's fastest computer with this kind of technology and get the same performance. These new Intel CPUs are really difficult to program efficiently. You would only get good performance on certain problem sets.
  • by Intron ( 870560 ) on Monday February 12, 2007 @11:01AM (#17982698)
    Realtime, photorealistic animation and speech processing? Too bad AI software still sucks or this could pass a Turing test where you look at someone on a screen and don't know whether they are real or not.
  • by Dr. Spork ( 142693 ) on Monday February 12, 2007 @11:04AM (#17982728)
    When I read about this I didn't get all worked up, since I imagine that it will be almost impossible for realistic applications to keep all 80 cores busy and get the teraflop benefits. But then I read about the possibility of using this for real-time ray tracing, and got very intrigued!

    Ray tracing is embarrassingly parallelizable (a rough sketch of the parallel structure follows below), and while I'm no expert, two teraflops might just be enough calculating power to do a pretty good job at scene rendering, maybe even in real time. To think this performance would be available from a standard 65nm die that uses 62 watts... that really could make a difference to gamers!
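    A minimal sketch of that parallel structure, assuming a scanline-per-core split with POSIX threads. The image size, core count, and trace_pixel kernel are hypothetical, not from the article; the point is that rays are independent, so workers need no locking:

        #include <pthread.h>

        #define WIDTH  1024
        #define HEIGHT 768
        #define CORES  80  /* one worker per core on a TeraScale-like part */

        static unsigned framebuffer[HEIGHT][WIDTH];

        /* Hypothetical per-pixel kernel; a real one would intersect a ray
         * against the scene. Stubbed so the sketch compiles. */
        static unsigned trace_pixel(int x, int y)
        {
            return (unsigned)(x ^ y);
        }

        /* Each worker renders an interleaved set of scanlines;
         * no shared state is written outside its own rows. */
        static void *render_slice(void *arg)
        {
            long id = (long)arg;
            for (int y = (int)id; y < HEIGHT; y += CORES)
                for (int x = 0; x < WIDTH; x++)
                    framebuffer[y][x] = trace_pixel(x, y);
            return NULL;
        }

        void render_frame(void)
        {
            pthread_t workers[CORES];
            for (long i = 0; i < CORES; i++)
                pthread_create(&workers[i], NULL, render_slice, (void *)i);
            for (int i = 0; i < CORES; i++)
                pthread_join(workers[i], NULL);
        }

    Interleaving scanlines rather than handing each core one contiguous band keeps the load balanced when some regions of the scene are more expensive to shade than others.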

  • by Vigile ( 99919 ) on Monday February 12, 2007 @11:14AM (#17982862)
    Yep, that's one of the things that got me excited about it as well. Did you also read the article on ray tracing on the same pcper.com site, by a German guy who made a Quake 4 ray-tracing engine?

    http://www.pcper.com/article.php?aid=334 [pcper.com]
  • by Anonymous Coward on Monday February 12, 2007 @12:17PM (#17983710)
    This clearly isn't for CPUs. It's for building GPUs and, more importantly, for Intel to get a piece of the huge and growing market for general-purpose programming on GPUs. We'll have to call them something other than GPUs in 5-10 years, as they'll do all sorts of other jobs too.

    IBM saw this coming and went with the Cell, AMD saw this coming and bought ATI, and NVidia already has a card with all these shader units. Intel would be stupid not to respond. They've already admitted a discrete GPU part is on the way (http://www.reghardware.co.uk/2007/01/23/intel_discrete_gpu_return).

    Only the other day there was a story (on either The Register or The Inquirer; AFAIK it has now been deleted...) about their GPU part being a whole chunk of in-order x86 cores on a chip. The pieces of the jigsaw are slotting together. That would make GPGPU programming easy for many. Intel want to move the x86 architecture onto GPUs.

    Ah well, I wonder when we'll get that story confirmed. Intel are clearly up to something... I think we'll know what shortly. All in all it spells trouble for NVidia, which is left out of the CPU part of the equation, with Intel, AMD and in some respects IBM all having combos.

    Anon because I've signed way too many NDAs...
  • Re:Oblig. (Score:3, Interesting)

    by PitaBred ( 632671 ) <slashdot@pitabre d . d y n d n s .org> on Monday February 12, 2007 @02:14PM (#17985364) Homepage
    Because it doesn't take special problem sets and programming on the current supercomputers?
  • by petrus4 ( 213815 ) on Tuesday February 13, 2007 @04:50AM (#17994694) Homepage Journal
    ...is a version of The Sims 2 rewritten so that the Sims have a much greater degree of genuine autonomy, run without human intervention (and recorded) for a period of months or years on a multi-TFlop system. If the environment were made a lot more detailed than it is in the retail version of the game, and if the Sims were given somewhat more capacity for learning than they've currently got, something tells me the results of such an experiment might be extremely interesting, given enough time.
  • by SemanticPhilosopher ( 1063592 ) on Tuesday February 13, 2007 @08:08AM (#17995632)
    Or more like the T9s... So the 32-way crossbar switch, with 32 processors, that I have working in the garage is coming back into fashion... Now all the work we did on interconnect topologies and their performance in networks of up to 1024 nodes might be useful. Hey, we might even make something from the book!... Welcome back to the late '80s, Intel. Do yourselves a favour: read the literature. We've done the painful stuff already; you don't need to waste money on the fundamental research. It's been done!
  • by Dr. Spork ( 142693 ) on Tuesday February 13, 2007 @09:18PM (#18006392)
    I'd heard about the Quake3 thing somewhere else. It's pretty cool with Quake4. What really impressed me, though, is that when they multiplied the number of polygons in the scene by several orders of magnitude, rendering performance fell only 60% or so. This makes it seem like a linear increase in processing power will accommodate an exponential improvement in scene detail (rough arithmetic below). This confirms my suspicion that real-time ray tracing is the future of game graphics.

    The fact that ray-traced Quake3 works OK in real time on present (big, but not specialized) hardware makes me think that Intel's chip might be able to do some impressive real-time ray tracing already, and that a 2012 version of the chip would render nicer scenes through ray tracing than conventional GPUs made with 2012 technology.
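    That sub-linear falloff is what you would expect from a ray tracer traversing a hierarchical acceleration structure such as a BVH or kd-tree: cost per ray grows roughly with the log of the polygon count, not linearly. A back-of-the-envelope model in C, with invented constants purely to show the shape of the curve:

        #include <math.h>
        #include <stdio.h>

        /* Rough model: per-ray cost ~ a + b * log2(triangles) for a
         * BVH traversal. The constants a and b are hypothetical. */
        int main(void)
        {
            const double a = 10.0, b = 3.0;
            double base  = a + b * log2(1e4);  /* ten thousand triangles */
            double dense = a + b * log2(1e7);  /* ten million triangles  */
            printf("1000x more polygons costs %.2fx more per ray\n",
                   dense / base);
            return 0;
        }

    Under this model a thousand-fold denser scene costs only about 1.6x more per ray, the same ballpark as the ~60% slowdown the parent describes.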
