Supercomputing, Hardware, Technology

Supercomputer Advancement Slows?

kgeiger writes "In the Feb. 2011 issue of IEEE Spectrum online, Peter Kogge, an IEEE Fellow and professor of computer science and engineering at the University of Notre Dame, outlines why we won't see exaflops computers soon. To start with, such a machine would consume 67 MW (an optimistic estimate) and generate a correspondingly enormous amount of heat. He concludes, 'So don't expect to see a supercomputer capable of a quintillion operations per second appear anytime soon. But don't give up hope, either. [...] As long as the problem at hand can be split up into separate parts that can be solved independently, a colossal amount of computing power could be assembled similar to how cloud computing works now. Such a strategy could allow a virtual exaflops supercomputer to emerge. It wouldn't be what DARPA asked for in 2007, but for some tasks, it could serve just fine.'"
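
For scale, 67 MW spread over 10^18 operations per second comes to roughly 67 picojoules per operation. The loosely coupled alternative the summary describes, splitting a job into parts that can be solved with no communication between them, can be illustrated with a short sketch. The Python example below is an editorial illustration, not from the article: a local process pool stands in for many independent machines, and a toy partial-sum workload plays the role of the independently solvable pieces.

    # A minimal sketch of the "split into independent parts" idea: the job is
    # cut into chunks, each chunk is solved with no communication to the
    # others, and the partial results are combined at the end. The chunk
    # count, the toy workload (summing x*x over a range), and the use of a
    # local process pool in place of many separate machines are all
    # illustrative assumptions.
    from multiprocessing import Pool


    def solve_chunk(bounds):
        """Solve one independent piece of the problem: here, a partial sum."""
        start, stop = bounds
        return sum(x * x for x in range(start, stop))


    def split(n, chunks):
        """Cut the range [0, n) into independent, non-overlapping chunks."""
        step = n // chunks
        return [(i * step, n if i == chunks - 1 else (i + 1) * step)
                for i in range(chunks)]


    if __name__ == "__main__":
        N, CHUNKS = 10_000_000, 8
        with Pool(processes=CHUNKS) as pool:
            partials = pool.map(solve_chunk, split(N, CHUNKS))
        print(sum(partials))  # same answer as the serial computation

The same decomposition works whether the workers are local processes, nodes in a cluster, or, as the summary suggests, loosely coupled machines on the Internet; what changes is only how expensive it is to ship the chunks around.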
Comments:
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Friday January 28, 2011 @12:51PM (#35033972)
    Comment removed based on user account deletion
  • by Animats ( 122034 ) on Friday January 28, 2011 @02:07PM (#35035024) Homepage

    What if, instead of trying to address everything that way, they broke up the computing and moved it to the data, so that RAM is tied directly to the logic that would use it?

    It's been tried. See Thinking Machines Corporation [wikipedia.org]. Not many problems will decompose that way, and all the ones that will can be decomposed onto clusters.

    The history of supercomputers is full of weird architectures intended to get around the "von Neumann bottleneck". Hypercubes, SIMD machines, dataflow machines, associative memory machines, perfect shuffle machines, partially-shared-memory machines, non-coherent cache machines - all were tried, and all went to the graveyard of bad supercomputing ideas.

    The two extremes in large-scale computing are clusters of machines interconnected by networks, like server farms and cloud computing, and shared-memory multiprocessors with hardware cache coherence, like almost all current desktops and servers. Everything else, with the notable exception of GPUs, has been a failure. Even the Cell, the most widely deployed non-standard architecture ever, was only used in the PS3, and was more trouble than it was worth.
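
To make the distinction in that comment concrete, here is a minimal sketch, an editorial addition rather than part of the comment, that computes the same sum both ways in Python: once with threads sharing a single address space (the shared-memory model) and once with processes that exchange data only through explicit messages (the cluster model). The workload, the worker counts, and the use of queues as stand-in "network messages" are illustrative assumptions.

    # Shared memory vs. message passing, in miniature. The threaded version
    # lets every worker read the same DATA list and write into a shared
    # results list; the process version hands each "node" its chunk as a
    # message and collects an explicit reply, which is how machines joined
    # only by a network must cooperate.
    import threading
    from multiprocessing import Process, Queue

    DATA = list(range(1_000_000))


    def shared_memory_sum(workers=4):
        """Threads share DATA and a results list in one address space."""
        results = [0] * workers
        chunk = len(DATA) // workers

        def work(i):
            results[i] = sum(DATA[i * chunk:(i + 1) * chunk])

        threads = [threading.Thread(target=work, args=(i,)) for i in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return sum(results)


    def node(inbox, outbox):
        """A 'cluster node' that only ever sees the chunk it is sent."""
        outbox.put(sum(inbox.get()))


    def cluster_style_sum(workers=4):
        """Processes cooperate only through explicit messages (queues)."""
        chunk = len(DATA) // workers
        inboxes, outbox = [Queue() for _ in range(workers)], Queue()
        procs = [Process(target=node, args=(inboxes[i], outbox))
                 for i in range(workers)]
        for i, p in enumerate(procs):
            p.start()
            inboxes[i].put(DATA[i * chunk:(i + 1) * chunk])
        total = sum(outbox.get() for _ in range(workers))
        for p in procs:
            p.join()
        return total


    if __name__ == "__main__":
        assert shared_memory_sum() == cluster_style_sum() == sum(DATA)

The cluster version has to copy its data into messages, which is exactly the cost that decides whether a problem decomposes well onto networked machines.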

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...