Supercomputing Hardware Technology

Supercomputer Advancement Slows?

kgeiger writes "In the February 2011 issue of IEEE Spectrum online, Peter Kogge, an IEEE Fellow and professor of computer science and engineering at the University of Notre Dame, outlines why we won't see exaflops computers soon. To start with, a machine consuming 67 MW (an optimistic estimate) is going to give off a lot of heat. He concludes, 'So don't expect to see a supercomputer capable of a quintillion operations per second appear anytime soon. But don't give up hope, either. [...] As long as the problem at hand can be split up into separate parts that can be solved independently, a colossal amount of computing power could be assembled similar to how cloud computing works now. Such a strategy could allow a virtual exaflops supercomputer to emerge. It wouldn't be what DARPA asked for in 2007, but for some tasks, it could serve just fine.'"
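
A minimal sketch of that "split into separate parts that can be solved independently" strategy, assuming a toy pthreads partial-sum workload; the worker function, sizes, and names here are illustrative placeholders, not anything Kogge or DARPA specified:

    /* Splitting work into fully independent chunks.  NWORKERS, NITEMS, and
     * the per-item math are made-up placeholders.
     * Compile with something like: cc -O2 sketch.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define NITEMS   (1L << 20)

    static double results[NWORKERS];     /* one private result slot per worker */

    struct slice { int id; long begin, end; };

    static void *worker(void *arg)
    {
        struct slice *s = arg;
        double acc = 0.0;
        /* Purely local work: no messages, no shared writes except our own slot. */
        for (long i = s->begin; i < s->end; ++i)
            acc += (double)i * 0.5;      /* stand-in for real per-item work */
        results[s->id] = acc;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NWORKERS];
        struct slice slices[NWORKERS];
        long chunk = NITEMS / NWORKERS;

        for (int i = 0; i < NWORKERS; ++i) {
            slices[i] = (struct slice){ i, i * chunk, (i + 1) * chunk };
            pthread_create(&tid[i], NULL, worker, &slices[i]);
        }

        double total = 0.0;
        for (int i = 0; i < NWORKERS; ++i) {
            pthread_join(tid[i], NULL);
            total += results[i];         /* the only "communication": the final combine */
        }
        printf("total = %f\n", total);
        return 0;
    }

Each worker touches only its own slice and its own result slot, so the only coordination is the final join and sum; that property is what lets this kind of job run on loosely coupled cloud nodes instead of a tightly coupled supercomputer.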
This discussion has been archived. No new comments can be posted.


  • by ceoyoyo ( 59147 ) on Friday January 28, 2011 @01:17PM (#35034290)

    Because nobody uses a real supercomputer for that kind of work. It's much cheaper to buy some processing from Amazon or use a loosely coupled cluster, or write an @Home style app.

    Supercomputers are used for tasks where fast communication between processors is important, and distributed systems don't work for these tasks.

    So the answer to your question is that tasks that are appropriate for distributed computing are already done that way (and when lots of people are willing to volunteer, why would they pay you?).
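
    A toy MPI loop makes the contrast concrete, assuming a global reduction on every iteration (the step count and arithmetic are arbitrary placeholders); a pattern like this is why such jobs can't simply be farmed out to Amazon or an @Home-style network:

    /* Tightly coupled toy loop: every rank synchronizes with all the others
     * on every step, so interconnect latency sits on the critical path.
     * Build and run with something like: mpicc allreduce.c && mpirun -np 8 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local = rank + 1.0, global = 0.0;

        for (int step = 0; step < 1000; ++step) {
            local = local * 0.999 + 1.0;              /* a bit of local compute */

            /* ...followed by a global reduction every single step.  Over a
             * WAN or a loosely coupled cluster this synchronization would
             * dominate the run time; a supercomputer interconnect can do it
             * many thousands of times per second. */
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        }

        if (rank == 0)
            printf("final global value across %d ranks: %f\n", nranks, global);

        MPI_Finalize();
        return 0;
    }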

  • by tarpitcod ( 822436 ) on Friday January 28, 2011 @01:28PM (#35034432)

    These modern machines, which consist of zillions of cores attached over very low-bandwidth, high-latency links, really aren't supercomputers for a huge class of applications. They only qualify if your application exhibits extreme memory locality, needs hardly any interconnect bandwidth, and can tolerate long latencies.

    The current crop of machines is driven mostly by marketing folks and not by people who really want to improve the core physics like Cray used to.

    BANDWIDTH COSTS MONEY, LATENCY IS FOREVER

    Take any of these zillion-dollar piles of CPUs and just try doing this:
    for ( size_t x = 0; x < bounds; ++x )
    {
            humungousMemoryStructure[x] = humungousMemoryStructure1[x] * humungousMemoryStructure2[randomAddress]
                                        + humungousMemoryStructure3[anotherMostlyRandomAddress];
    }

    It'll suck eggs. You'd be better off with a single liquid-nitrogen-cooled GaAs/ECL processor surrounded by the fastest memory you can get your hands on, all packed into the smallest space you can manage and cooled with LN2 or LHe.
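
    A rough way to see that on any commodity box is a toy microbenchmark along these lines (the array size and the xorshift index generator are arbitrary choices; the arithmetic is identical in both runs, only the access pattern changes):

    /* Toy microbenchmark: identical arithmetic, sequential vs. random index stream. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 24)           /* 16M doubles per array, ~128 MB each */

    static uint64_t xorshift(uint64_t *s) /* cheap pseudo-random index stream */
    {
        *s ^= *s << 13;  *s ^= *s >> 7;  *s ^= *s << 17;
        return *s;
    }

    static double run(const double *a, const double *b, double *out, int randomized)
    {
        uint64_t state = 88172645463325252ULL;
        clock_t t0 = clock();
        for (size_t i = 0; i < N; ++i) {
            size_t j = randomized ? (size_t)(xorshift(&state) % N) : i;
            out[i] = a[i] * b[j];         /* same work, different access pattern */
        }
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b);
        double *out = malloc(N * sizeof *out);
        if (!a || !b || !out) return 1;
        for (size_t i = 0; i < N; ++i) { a[i] = 1.0; b[i] = 2.0; }

        printf("sequential: %.3f s\n", run(a, b, out, 0));
        printf("random:     %.3f s\n", run(a, b, out, 1));
        free(a); free(b); free(out);
        return 0;
    }

    The randomized run will typically be several times slower despite doing exactly the same number of multiplies; that gap is the latency the big MPP machines paper over rather than attack.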

    Half the problem is that everyone measures performance for publicity with LINPACK MFLOPS. It's a horrible metric.

    If you really want to build a great new supercomputer, get a (smallish) bunch of smart people together like Cray did, and focus on improving the core issues. Instead of spending all your efforts on hiding latency, tackle it head-on. Figure out how to build a fast processor and cool it. Figure out how to surround it with memory.

    Yes:

    Customers will still use commodity MPP machines for the stuff that parallelizes.
    Customers will still hire mathematicians and have them look at ways to map problems that seem inherently non-local into spaces that are local (a rough tiling sketch of that idea follows below).
    Customers with money whom the mathematicians couldn't help will need your company and your GaAs/ECL or LHe-cooled fastest scalar / short-vector box in the world.
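
    One generic illustration of that "map non-local into local" idea (nothing Cray-specific; the matrix size and tile edge are arbitrary): cache-blocking a matrix multiply so each tile gets reused while it is hot in cache instead of streaming whole matrices past the CPU for every output row.

    /* Cache-blocked matrix multiply: c += a * b, all N_DIM x N_DIM row-major.
     * The point is the access pattern: each BLOCK x BLOCK tile is reused
     * while it is hot in cache instead of being refetched from DRAM. */
    #include <stddef.h>

    #define N_DIM 1024
    #define BLOCK   64                    /* tile edge; tune to your cache */

    void matmul_blocked(const double *a, const double *b, double *c)
    {
        for (size_t ii = 0; ii < N_DIM; ii += BLOCK)
            for (size_t kk = 0; kk < N_DIM; kk += BLOCK)
                for (size_t jj = 0; jj < N_DIM; jj += BLOCK)
                    /* all the work below stays inside three small tiles */
                    for (size_t i = ii; i < ii + BLOCK; ++i)
                        for (size_t k = kk; k < kk + BLOCK; ++k) {
                            double aik = a[i * N_DIM + k];
                            for (size_t j = jj; j < jj + BLOCK; ++j)
                                c[i * N_DIM + j] += aik * b[k * N_DIM + j];
                        }
    }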
