100 Million-Core Supercomputers Coming By 2018

CWmike writes "As amazing as today's supercomputing systems are, they remain primitive, and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough. That is why a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for SC09, the 22nd annual supercomputing conference, will be the next performance goal: an exascale system. Today's supercomputers are well short of an exascale. The world's fastest system, according to the just-released Top500 list, is Jaguar, a Cray XT5 at Oak Ridge National Laboratory with 224,256 processing cores built from six-core Opteron chips made by Advanced Micro Devices (AMD). Jaguar is capable of a peak performance of 2.3 petaflops, but its record is just a blip, a fleeting benchmark. The US Department of Energy has already begun holding workshops on building a system 1,000 times more powerful, an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility, which houses Jaguar. The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design. The latter effort is now under way in France as the International Thermonuclear Experimental Reactor (ITER), which the US is co-developing. Exascale systems are expected to arrive in 2018, in line with Moore's Law, which helps to explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law."
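Some quick arithmetic on the quoted figures, for context: an exaflop is 10^18 floating-point operations per second, so the target is roughly 10^18 / (2.3 x 10^15), or about 435 times Jaguar's peak. Spread across 100 million cores, 10^18 FLOPS works out to 10^10, or 10 gigaflops per core, which is almost exactly what Jaguar already delivers (2.3 x 10^15 / 224,256 is about 10.3 gigaflops per core). Nearly all of the projected gain, in other words, is expected to come from core count rather than per-core speed.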
This discussion has been archived. No new comments can be posted.

  • by pete-classic ( 75983 ) <hutnick@gmail.com> on Monday November 16, 2009 @03:04PM (#30119430) Homepage Journal

    As amazing as today's supercomputing systems are, they remain primitive

    Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

    -Peter

  • Re:100 Million? (Score:3, Insightful)

    by MozeeToby ( 1163751 ) on Monday November 16, 2009 @03:11PM (#30119550)

How about 1 million cores being a mega-core? Then the proposed supercomputer would be a 100 mega-core computer.

  • Re:100 Million? (Score:2, Insightful)

    by Anonymous Coward on Monday November 16, 2009 @03:13PM (#30119604)

Even though a number's "effect" on you diminishes as it goes up, that doesn't mean it stops being significant. There's a reason engineers use quantitative measures instead of qualitative ones.
    How do you tell the difference between hot and really hot or really really hot?

    Really.

    How about the difference between 10, 20 and 30?

    10

    Which gives you more information?

  • by wondi ( 1679728 ) on Monday November 16, 2009 @03:14PM (#30119624)
All this effort at creating parallel computers ends up solving very few problems. HPC has been struggling with parallelism for decades, and no easy solutions have been found yet. Note that these computers are aimed at solving particular problems (e.g. modeling weather), not at being a vehicle for quickly solving any problem. When comparable multi-processing capacity is in your cell phone, what are you going to do with it?
  • Oink, oink (Score:2, Insightful)

    by Animats ( 122034 ) on Monday November 16, 2009 @03:19PM (#30119698) Homepage

    The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.

    Sounds like a pork program. What are "bio energy products," anyway? Ethanol? Supercomputer proposals seem to come with whatever buzzword is hot this year.

    It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

  • Re:100 Million? (Score:4, Insightful)

    by Yvan256 ( 722131 ) on Monday November 16, 2009 @03:24PM (#30119828) Homepage Journal

    Just because CS has been abusing a system for over four decades doesn't make it right.

  • by David Greene ( 463 ) on Monday November 16, 2009 @03:38PM (#30120076)

    Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

    Pretty easily, actually. There are lots of problems to solve, not the least of which is the programming model. We're still basically using MPI to drive these machines, and that will not cut it on a 100-million-core machine where each socket has on the order of 100 cores. MPI can very easily be described as "primitive," as well as "clunky," "tedious" and "a pain in the ***."

    How do we checkpoint a million-core program? How do we debug a million-core program? We are in the infancy of computing.
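
    To make the "primitive" complaint concrete, here is a minimal sketch of the hand-written message passing MPI requires (toy example; assumes any standard MPI implementation such as MPICH or Open MPI):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, value = 0;

            MPI_Init(&argc, &argv);               /* every process runs this same program */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* find out which process we are */

            if (rank == 0) {
                value = 42;
                /* rank 0 hand-delivers a single integer to rank 1 */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", value);
            }

            MPI_Finalize();
            return 0;
        }

    Every data movement is spelled out by hand like this; scaling that style of programming, plus the checkpointing and debugging, to 100 million cores is the open problem.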

  • Re:Oink, oink (Score:2, Insightful)

    by David Greene ( 463 ) on Monday November 16, 2009 @03:45PM (#30120206)

    Sounds like a pork program. What are "bio energy products", anyway. Ethanol?

    I'm no expert on this, but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo. Combustion modeling is a common problem solved by HPC systems. Or maybe we can explore how to use bacteria engineered to process waste and give off energy as a byproduct. I don't know; the possibilities are endless.

    It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

    Define "supercomputer." Sony uses them. So does Boeing. The auto industry uses clusters to model crashes, but I believe that's more limited by the design of the off-the-shelf software than anything. They could certainly run on supercomputer-class machines if the vendors ported them.

    And the military uses them a lot. Much of the DOE research done on these machines is probably defense-driven.

  • Re:100 Million? (Score:4, Insightful)

    by JesseMcDonald ( 536341 ) on Monday November 16, 2009 @04:58PM (#30121344) Homepage

    You use SI prefixes with SI units. The 'byte' is not an SI unit; it's not even the most basic representation, being a group of eight bits. If you insist on using base-ten units in combination with bytes then you're essentially arguing for layering a base-ten system on top of a base-two one.

    So far as I know there is no designated SI unit for information. Following the pattern of the other SI units, however, the best choice would be the bit. If you want base-ten measurements, then, you should use "kilobit", "megabit", etc., which unambiguously use the SI prefixes, official unit or not. The non-SI term "megabyte" will never unambiguously mean "10^6 bytes", and trying to make it so just renders the term useless for any purpose requiring precision.
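
    To put numbers on the ambiguity, here is a small self-contained C sketch comparing the decimal and binary readings of each prefix:

        #include <stdio.h>

        int main(void)
        {
            /* decimal (SI) reading vs. binary reading of the same prefix */
            double kB = 1e3,  KiB = 1024.0;
            double MB = 1e6,  MiB = KiB * 1024.0;
            double GB = 1e9,  GiB = MiB * 1024.0;
            double TB = 1e12, TiB = GiB * 1024.0;

            printf("kilo: binary is %.1f%% larger\n", (KiB / kB - 1) * 100); /*  2.4 */
            printf("mega: binary is %.1f%% larger\n", (MiB / MB - 1) * 100); /*  4.9 */
            printf("giga: binary is %.1f%% larger\n", (GiB / GB - 1) * 100); /*  7.4 */
            printf("tera: binary is %.1f%% larger\n", (TiB / TB - 1) * 100); /* 10.0 */
            return 0;
        }

    The gap grows with every prefix, which is why a "1 TB" drive sold in decimal units shows up as roughly 931 "GB" in an operating system that counts in binary.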

  • Re:The Jaguar? (Score:3, Insightful)

    by Yvan256 ( 722131 ) on Monday November 16, 2009 @05:21PM (#30121648) Homepage Journal

    Well, okay, the second Jaguar (cue reply about the Atari Jaguar not being the second commercial product called Jaguar).

    Also, the person who modded my post above "interesting" is either on crack or I was right (by luck) about the Jaguar having 1 megaflop of computing power.

    My point, however, was that the Atari Jaguar was a mega-flop, i.e. its sales were abysmal, its support non-existent, etc.

  • Re:100 Million? (Score:5, Insightful)

    by sexconker ( 1179573 ) on Monday November 16, 2009 @05:33PM (#30121906)

    CS abused nothing.
    KB means 1024 bytes, and it always will.

    KB is not K.
    KB is not stepping on the toes of any SI units.
    SI units are not sacred.
    SI units are not enforceable by law.
    SI units step on their own toes and are ambiguous themselves.

    Anytime you see a b or a B after a K, M, etc. scalar multiplier, you are talking about bits or bytes and are using 1024 instead of 1000. It is not confusing. It is not ambiguous.

    The "confusion" exists only because of storage-device marketers and idiot "engineers" who didn't check their work, made a mistake on some project, and refuse to admit it.

    Furthermore, classical SI scalars are used for measuring - bits are discrete finite quanta - we COUNT them. Would you like a centibyte? TOO FUCKING BAD.

    The scalar of 1000 was chosen out of pure convenience. The scalar 1024 was chosen out of convenience, and was made a power of 2 because of the inherent nature of storage with respect to permutations (how many bits do I need to contain this space at this resolution? how much resolution and space can I get from this many bits?) and because of physical aspects relating to the manufacturing and design of the actual circuits.

    CS has a fucking REASON to use 1024.
    SI does not have a fucking reason to use 1000.

    There is more validity in claiming that all SI units should be switched to 1024 than there is in suggesting KB mean 1024 bytes.

    "But everything written before the change will be ambiguous!!!" yet you SI proponents tried to shove that ibi shit into CS (and failed miserably, thank you) despite the fact that it would cause the same fucking problem ("Does he mean KB or KiB?" "When was it published?" "Uh, Copyright 1999-2009" "Uh...").

    In short, 1024 is correct, 1000 is wrong.
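
    The "how many bits do I need to contain this space" question above is just a ceiling logarithm base 2, which is one reason powers of two fall out of the hardware so naturally. A minimal sketch (the function name is made up for illustration):

        #include <stdio.h>

        /* smallest b such that 2^b distinct bit patterns can label n items */
        static unsigned bits_needed(unsigned long long n)
        {
            unsigned b = 0;
            while (b < 64 && (1ULL << b) < n)
                b++;
            return b;
        }

        int main(void)
        {
            printf("%u\n", bits_needed(1000)); /* 10, since 2^10 = 1024 >= 1000 */
            printf("%u\n", bits_needed(1024)); /* 10, an exact fit */
            printf("%u\n", bits_needed(1025)); /* 11 */
            return 0;
        }

    Note that labeling 1000 items already costs the full 10 bits; the hardware rounds you up to 1024 whether you like it or not.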

  • Re:100 Million? (Score:3, Insightful)

    by Quantumstate ( 1295210 ) on Monday November 16, 2009 @05:55PM (#30122316)

    SI has a very good reason. Try converting from mm to km: it is trivial. Now try the same with a number you have to multiply by 1024*1024; that clearly isn't as friendly.

    The reason SI uses 1000 is that we work in base 10, so 1000 keeps the conversions easy.
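
    A toy illustration of that point, with the same count expressed in decimal and binary units:

        #include <stdio.h>

        int main(void)
        {
            /* decimal prefixes: converting is just shifting the decimal point */
            double mm = 5340000.0;
            printf("%.0f mm = %.2f km\n", mm, mm / 1e6);   /* 5.34 km, readable by eye */

            /* binary prefixes: converting needs an awkward divisor */
            double bytes = 5340000.0;
            printf("%.0f bytes = %.2f MiB\n", bytes, bytes / (1024.0 * 1024.0));
            /* 5.09 MiB, not something you can read off by inspection */
            return 0;
        }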

  • Re:100 Million? (Score:3, Insightful)

    by Anonymous Coward on Monday November 16, 2009 @06:08PM (#30122560)

    Yes, numbers in computers go from 1, 2, 4, ..., 1024, 2048, 4096, ..., 1048576, ..., etc. Nobody is arguing against that.

    Nobody is debating whether or not computers work in base 2 in some areas (such as RAM and addressing), so stop bringing that up; it's not a valid argument for making a kilo equal to 1024. Not everything in a computer is base 2: Ethernet, for instance, comes in 10/100/1000 Mbit/s.

    The problem is exactly that "1024 was chosen out of convenience". A kilo means 1000 and that's all there is to it, you cannot change facts.

    As for "causing the same fucking problem", we're already there. Fixing it now would mean that at least from this point forward we'd know for sure what people meant, that's why the new units were proposed. If I say "1 kibibyte" you know it's 1024. If I say "1 kilobyte" you can't be sure I mean 1000 or 1024.

    CS always has such stupid problems. If I give you the date "10/11/12", what the hell does it mean? First there's the Y2K problem with old data; second, you have the MM/DD/YY vs DD/MM/YY vs YY/MM/DD problem (ISO 8601 fixes this beautifully: YYYY-MM-DD).

    Just because there are old, set ways of doing something doesn't mean they're the right ways.
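
    For what it's worth, emitting the unambiguous ISO 8601 form is a one-liner in most languages; a C sketch:

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            char buf[11]; /* "YYYY-MM-DD" plus the terminating NUL */
            time_t now = time(NULL);

            /* %Y-%m-%d is the ISO 8601 calendar date; it also sorts
               lexicographically in chronological order */
            strftime(buf, sizeof buf, "%Y-%m-%d", localtime(&now));
            printf("%s\n", buf);
            return 0;
        }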

  • Re:100 Million? (Score:3, Insightful)

    by sFurbo ( 1361249 ) on Tuesday November 17, 2009 @03:35AM (#30126690)

    Anytime you see a b or a B after a K, M, etc. scalar multiplier, you are talking about bits or bytes and are using 1024 instead of 1000. It is not confusing. It is not ambiguous.

    So whenever a unit is prefixed by M it means 1,000,000, except in these two particular cases. How is that not confusing and/or ambiguous?

    CS has a fucking REASON to use 1024. SI does not have a fucking reason to use 1000.

    CS has a reason for 1024, but none for wanting it to be called k. SI has a reason for wanting every k to mean 1000. It is fine to want prefixes that fit the needs of one particular field, but don't take something well-defined and give it a conflicting meaning. That is just asking for trouble, which is exactly what you got.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...