Supercomputing / Hardware / Technology

100 Million-Core Supercomputers Coming By 2018

CWmike writes "As amazing as today's supercomputing systems are, they remain primitive, and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough — a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system. Today's supercomputers are well short of an exascale. The world's fastest system, according to the just-released Top500 list, is the Cray XT5 'Jaguar' at Oak Ridge National Laboratory, which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD) and is capable of a peak performance of 2.3 petaflops. But Jaguar's record is just a blip, a fleeting benchmark. The US Department of Energy has already begun holding workshops on building a system that's 1,000 times more powerful — an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility that includes Jaguar. Exascale systems will be needed for high-resolution climate models, bioenergy products and smart grid development, as well as fusion energy design; the latter project is now under way in France as the International Thermonuclear Experimental Reactor, which the US is co-developing. Exascale systems are expected to arrive in 2018 — in line with Moore's Law — which helps explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law."
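
A quick back-of-envelope on the headline numbers (my arithmetic; the ~10 gigaflops-per-core figure is an assumption, not from the article):

# Rough check of the summary's figures, assuming ~10 GFLOPS sustained per core.
jaguar_peak = 2.3e15   # Jaguar's peak, 2.3 petaflops
exaflop     = 1e18     # the exascale target
per_core    = 1e10     # assumed sustained rate per core (illustrative)

print(f"exaflop / Jaguar peak ~ {exaflop / jaguar_peak:.0f}x")       # ~435x
print(f"cores needed at 10 GFLOPS each ~ {exaflop / per_core:.0e}")  # ~1e+08, i.e. 100 million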


Comments:
  • by 140Mandak262Jamuna ( 970587 ) on Monday November 16, 2009 @03:13PM (#30119614) Journal
The programming techniques and mathematical formulations needed to take advantage of such a large number of processors continue to be the main stumbling blocks. Some kinds of simulation parallelize naturally. Time-accurate fluid flow simulation, for example, is very easy to parallelize: technically you can devote a processor to each element and do time marching nicely. But not all physics problems are amenable to parallelization. Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the allowable time step shrinks so much that the simulation takes too long even on a 100 million processor machine.

    The CFL condition that limits the maximum time step one can take shows no sign of relenting. The score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.
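
    A minimal sketch of the CFL point (my illustration, with made-up numbers), assuming a simple explicit 1-D advection scheme where dt <= C * dx / u: refining the smallest cell shrinks the allowable time step no matter how many cores share the spatial work.

    # Back-of-envelope: CFL limit dt <= C * dx / u for explicit 1-D advection.
    # Halving the smallest cell halves the allowable time step, so adaptive
    # refinement raises the wall-clock cost of each simulated second even if
    # the spatial work is spread over millions of cores.
    def cfl_timestep(min_dx: float, velocity: float, courant: float = 0.9) -> float:
        """Largest stable explicit time step for the finest cell in the mesh."""
        return courant * min_dx / velocity

    u = 1.0                        # advection speed, illustrative
    for dx in (1e-2, 1e-3, 1e-4):  # progressively refined smallest cell
        dt = cfl_timestep(dx, u)
        print(f"dx={dx:.0e}  dt={dt:.2e}  steps per simulated second={1.0 / dt:.1e}")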

  • The Jaguar? (Score:3, Interesting)

    by Yvan256 ( 722131 ) on Monday November 16, 2009 @03:21PM (#30119748) Homepage Journal

    The Jaguar is capable of a peak performance of 2.3 petaflops.

    The first Jaguar [wikipedia.org] was a single megaflop.

  • by mcrbids ( 148650 ) on Monday November 16, 2009 @03:22PM (#30119780) Journal

We're still at the point where unthreaded languages (like PHP) are viable. For example, we use PHP in a complex, multi-server, multi-core cluster, and its "share nothing" approach scales quite nicely, in that having more and more users hitting the system on separate servers doesn't really cause a problem, since there's virtually no cross-communication going on.

But there's a scalability limit in what you can do PER PROCESS. There are some very processor-intensive functions that simply take a while to do (such as rendering a 100-page report, then converting it to PDF), and there's currently no way to spread that load in PHP beyond a single core.

At the other extreme, we have almost the same problem: with such a large number of cores, sharing resources among threads and processes in the usual way is really no longer feasible.

Languages like Erlang take a "share nothing" approach not at the process/thread level but at the function level. Individual functions within a process are themselves "share nothing" and thus can easily scale across multiple cores, processors, and servers in a networked cluster. (At least, that's the theory.)

    So how 'bout it, folks? Where are the benchmarks showing how languages DESIGNED to take advantage of parallel processors and clusters actually scale up in the real world? Is Erlang the cat's meow when discussing systems of this scale?

    I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!
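
    A hypothetical sketch (mine, not the poster's) of the per-process fan-out being asked for, using Python's multiprocessing to render independent report pages on separate cores; render_page and the page count are stand-ins for illustration.

    # Hypothetical: fan a 100-page report out across local cores, then
    # stitch the rendered pages back together in order.
    from multiprocessing import Pool

    def render_page(page_number: int) -> bytes:
        """Stand-in for an expensive per-page render (layout, charts, etc.)."""
        return f"<rendered page {page_number}>".encode()

    def render_report(num_pages: int = 100, workers: int = 8) -> bytes:
        with Pool(processes=workers) as pool:
            # Pages are independent ("share nothing"), so they map cleanly onto
            # worker processes; pool.map preserves page order.
            pages = pool.map(render_page, range(1, num_pages + 1))
        return b"\n".join(pages)  # a real version would hand pages to a PDF writer

    if __name__ == "__main__":
        print(len(render_report()), "bytes rendered")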

  • human brain (Score:4, Interesting)

    by simoncpu was here ( 1601629 ) on Monday November 16, 2009 @03:24PM (#30119830)
    How many cores do we need to simulate a human brain?
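    A rough back-of-envelope (my numbers, every constant an assumption): with on the order of 10^14 synapses, an assumed 10^4 operations per synapse per second for a crude spiking model, and roughly 10 gigaflops sustained per core, the answer lands near the 100 million cores in the headline.

    # Crude estimate; every constant here is an assumption.
    synapses = 1e14                # ~100 trillion synapses
    ops_per_synapse_per_s = 1e4    # assumed cost of a simple spiking update
    core_ops_per_s = 1e10          # ~10 GFLOPS sustained per core, assumed

    total_ops_per_s = synapses * ops_per_synapse_per_s       # ~1e18 ops/s
    print(f"~{total_ops_per_s / core_ops_per_s:.0e} cores")  # ~1e+08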
  • by mcgrew ( 92797 ) * on Monday November 16, 2009 @03:26PM (#30119870) Homepage Journal

My cell phone is a supercomputer. At least, it would have been if I'd had it in 1972. Rather than being from the future, he, like me, is from the past, living in this science fiction future where all that fantasy stuff has come true: doors that open by themselves, rockets to space, phones that need no wires and fit in your pocket, computers on your desk, ovens that bake a potato in three minutes without getting hot, flat-screen TVs that aren't round at the corners, eye implants that cure nearsightedness, farsightedness, astigmatism and cataracts all at once, etc.

Back when I was young it didn't seem primitive at all. Looking back, geez. When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide. These days they say "you're going to sleep now," and you blink and find yourself in the recovery room, feeling no pain or nausea, with a tiny scar.

    We are indeed living in primitive times. Back in the 1870s a man quit the Patent office on the grounds that everything useful had already been invented. If you're young enough you're going to see things that you couldn't imagine, or at least couldn't believe possible.

    Sickness, pain, and death. And Star Trek. [slashdot.org]

  • Re:100 Million? (Score:3, Interesting)

    by aztracker1 ( 702135 ) on Monday November 16, 2009 @03:52PM (#30120356) Homepage

Far more computer science types wind up working with money (base-10) than with anything base-2 or base-16.
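
    As an aside (my example, not the poster's): the base-10 point is why binary floating point is a poor fit for money, since most decimal amounts have no exact base-2 representation.

    from decimal import Decimal

    # Binary floats accumulate representation error on decimal amounts...
    print(0.10 + 0.20)                        # 0.30000000000000004
    # ...while a base-10 decimal type keeps cents exact.
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30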

  • by David Greene ( 463 ) on Monday November 16, 2009 @04:42PM (#30121100)

    Further even in the nice cases like fluid flow, if one tries to do solution adaptive meshing, no uniform grids etc, the time step slows down so much the simulation takes too long even on a 100 million processor machine.

That's true in general. However, techniques like dynamic scheduling can help. Work-stealing algorithms and other tricks will probably become part of the general programming model as we move forward. More and more of this has to be pushed into compilers, runtimes, and libraries.
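
    A minimal sketch of work stealing (my illustration, not the poster's or any particular runtime's): each worker drains its own deque from the back and, when idle, steals from the front of another worker's deque, so imbalanced work gets rebalanced without a central scheduler.

    import random
    import threading
    from collections import deque

    class WorkStealingPool:
        def __init__(self, num_workers: int):
            self.queues = [deque() for _ in range(num_workers)]
            self.locks = [threading.Lock() for _ in range(num_workers)]
            self.workers = [threading.Thread(target=self._run, args=(i,))
                            for i in range(num_workers)]

        def submit(self, worker_id: int, task):
            with self.locks[worker_id]:
                self.queues[worker_id].append(task)

        def _take(self, i):
            # Prefer our own queue (LIFO end) to keep caches warm...
            with self.locks[i]:
                if self.queues[i]:
                    return self.queues[i].pop()
            # ...then scan the other workers and steal from the FIFO end.
            for victim in random.sample(range(len(self.queues)), len(self.queues)):
                if victim == i:
                    continue
                with self.locks[victim]:
                    if self.queues[victim]:
                        return self.queues[victim].popleft()
            return None  # no work found anywhere; this worker retires

        def _run(self, i):
            task = self._take(i)
            while task is not None:
                task()
                task = self._take(i)

        def run_to_completion(self):
            for w in self.workers:
                w.start()
            for w in self.workers:
                w.join()

    if __name__ == "__main__":
        pool = WorkStealingPool(4)
        for n in range(20):  # deliberately imbalanced: everything starts on worker 0
            pool.submit(0, lambda n=n: print(f"task {n} done"))
        pool.run_to_completion()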

  • by petrus4 ( 213815 ) on Monday November 16, 2009 @09:42PM (#30124784) Homepage Journal

    ...what might happen if we could run a copy of The Sims on a truly massive supercomputer. It would need to be somewhat customised for that particular machine/environment, of course, but I think it could be interesting.

    There were times when I did see something close to genuinely emergent behaviour in the Sims 2, or more specifically, emergent combinations of pre-existing routines. You need to set things up for them in a way which is somewhat out of the box, and definitely not in line with real world human architectural or aesthetic norms, but it can happen.

Makes me think: if we could run The Sims, or the bots from some currently existing FPS, in parallel on a sufficiently large scale, we might eventually start seeing some very interesting results, at least within the context of said games.
