Supercomputing | Hardware | Technology

100 Million-Core Supercomputers Coming By 2018

CWmike writes "As amazing as today's supercomputing systems are, they remain primitive and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough — a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system. Today, supercomputers are well short of an exascale. The world's fastest system at Oak Ridge National Laboratory, according to the just-released Top500 list, is a Cray XT5 system, which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD). The Jaguar is capable of a peak performance of 2.3 petaflops. But Jaguar's record is just a blip, a fleeting benchmark. The US Department of Energy has already begun holding workshops on building a system that's 1,000 times more powerful — an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility that includes Jaguar. The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design. The latter project is now under way in France: the International Thermonuclear Experimental Reactor, which the US is co-developing. Exascale systems are expected to arrive in 2018 — in line with Moore's Law — which helps to explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law."
  • by Itninja ( 937614 ) on Monday November 16, 2009 @02:03PM (#30119420) Homepage
    Can't we just start calling this a 'supercore' or something? When the numbers get that high it kind of goes beyond what most people can visualize. Like describing how hot the Sun is... let's just say it's "exactly 1 Sun hot".
    • Re: (Score:3, Insightful)

      by MozeeToby ( 1163751 )

      How about 1 million cores being a mega-core. So the proposed supercomputer would be a 100 mega-core computer.

      • by Yvan256 ( 722131 ) on Monday November 16, 2009 @02:18PM (#30119674) Homepage Journal

        Let's just make sure it's 1 000 000 cores and not 1 048 576 cores... let's not make that mistake again.

        • by _KiTA_ ( 241027 ) on Monday November 16, 2009 @02:23PM (#30119786) Homepage

          Let's just make sure it's 1 048 576 cores and not 1 000 000 cores... let's not make that mistake again.

          • Re:100 Million? (Score:4, Insightful)

            by Yvan256 ( 722131 ) on Monday November 16, 2009 @02:24PM (#30119828) Homepage Journal

            Just because CS has been abusing a system for over four decades doesn't make it right.

            • Re:100 Million? (Score:5, Insightful)

              by sexconker ( 1179573 ) on Monday November 16, 2009 @04:33PM (#30121906)

              CS abused nothing.
              KB means 1024 bytes, and it always will.

              KB is not K.
              KB is not stepping on the toes of any SI units.
              SI units are not sacred.
              SI units are not enforceable by law.
              SI units step on their own toes and are ambiguous themselves.

              Anytime you see a b or a B after a K, M, etc. scalar multiplier, you are talking about bits or bytes and are using 1024 instead of 1000. It is not confusing. It is not ambiguous.

              It's the fault of storage device marketers and idiot "engineers" who didn't check their work, made a mistake on some project, and refuse to admit it that the "confusion" exists.

              Furthermore, classical SI scalars are used for measuring - bits are discrete finite quanta - we COUNT them. Would you like a centibyte? TOO FUCKING BAD.

              The scalar of 1000 was chosen out of pure convenience. The scalar 1024 was chosen out of convenience, and was made a power of 2 because of the inherent nature of storage with respect to permutations (how many bits do I need to contain this space at this resolution? how much resolution and space can I get from this many bits?) and because of physical aspects relating to the manufacturing and design of the actual circuits.

              CS has a fucking REASON to use 1024.
              SI does not have a fucking reason to use 1000.

              There is more validity in claiming that all SI units should be switched to 1024 than there is in suggesting KB mean 1024 bytes.

              "But everything written before the change will be ambiguous!!!" yet you SI proponents tried to shove that ibi shit into CS (and failed miserably, thank you) despite the fact that it would cause the same fucking problem ("Does he mean KB or KiB?" "When was it published?" "Uh, Copyright 1999-2009" "Uh...").

              In short, 1024 is correct, 1000 is wrong.

              • Re: (Score:3, Insightful)

                SI has a very good reason. Try converting from mm to km. It is trivial. Now try the same conversion with factors of 1024*1024; that clearly isn't as friendly.

                Therefore we use 1000 in SI because we work in base 10 and it makes the numbers easy.

              • Re: (Score:3, Insightful)

                by Anonymous Coward

                Yes, numbers in computers go from 1, 2, 4, ..., 1024, 2048, 4096, ..., 1048576, ..., etc. Nobody is arguing against that.

                Nobody is debating whether or not computers work in base 2 in some areas (such as RAM, addressing, etc.), so stop bringing that up; it's not a valid argument for making a kilo equal to 1024. Not everything in a computer is base 2, such as the Ethernet port being 10/100/1000.

                The problem is exactly that "1024 was chosen out of convenience". A kilo means 1000 and that's all there is to

                • by Hatta ( 162192 )

                  If I say "1 kibibyte" you know it's 1024.

                  I then also know that you're a twit.

                  If I say "1 kilobyte" you can't be sure I mean 1000 or 1024.

                  I know you mean 1024, unless you're a scummy hard disk manufacturer. The only reason to ever use kilobyte to mean 1000 bytes is to mislead people.

              • Re: (Score:3, Insightful)

                by sFurbo ( 1361249 )

                Anytime you see a b or a B after a K, M, etc. scalar multiplier, you are talking about bits or bytes and are using 1024 instead of 1000. It is not confusing. It is not ambiguous.

                So, whenever you have a unit prefixed by M, it means 1000000, except in these two particular cases. How is that not confusing and/or ambiguous?

                CS has a fucking REASON to use 1024. SI does not have a fucking reason to use 1000.

                CS has a reason for 1024, but none for wanting it to be called k. SI has a reason for wanting every k to mean 1000. It is fine to want prefixes that fit the use of one particular field, but don't just take something well-defined and give it a conflicting meaning. That is just asking for trouble, which is exactly what you got.
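
                A minimal sketch (Python, purely illustrative; not from either poster) of how far the decimal and binary readings of the same prefix letter drift apart: about 2.4% at kilo, roughly 12.6% by peta.

                  # Decimal (SI) vs. binary (traditional CS) reading of the same prefix letter.
                  for i, p in enumerate("KMGTP", start=1):
                      decimal = 1000 ** i   # SI reading: 1 KB = 1000 bytes
                      binary = 1024 ** i    # CS reading: 1 KB = 1024 bytes
                      print(f"1 {p}B = {decimal:>16} or {binary:>16} bytes ({binary / decimal:.1%} of the SI value)")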

      • With this many cores, don't they cease being cores, and become more of a smudge?
    • The core? The surface? The corona?

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Just because a number's "effect" on you diminishes as it goes up doesn't mean it stops being significant. There's a reason engineers use quantitative measures instead of qualitative ones.
      How do you tell the difference between hot, really hot, and really really hot?

      Really.

      How about the difference between 10, 20 and 30?

      10

      Which gives you more information?

    • by Yvan256 ( 722131 )

      The definition of "supercomputer" changes as time goes by. Today's cellphones are yesterday's supercomputers.

      • by ari_j ( 90255 )
        Therefore, cell phones on January 2, 2018 will have 100 million processor cores.
    • I suspect that we will end up calling this a heuristically designed processor. Or something similar....
    • by ArbitraryDescriptor ( 1257752 ) on Monday November 16, 2009 @02:30PM (#30119960)
      I am currently accepting investors to help build a one billion core supercomputer to create high resolution climate models that take into account the waste heat from a 100 million core supercomputer making a high resolution climate model.

      (Seriously, how much heat is that thing going to put out?)
      • by blueg3 ( 192743 )

        (Seriously, how much heat is that thing going to put out?)

        As much energy as it consumes. For climate models, though, direct waste heat production is negligible compared to climatological effects (e.g., CO2).
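
        For a rough sense of scale, a back-of-envelope sketch in Python (every number here is an assumption, not from the article; 20 MW is a figure often floated as an exascale power target):

          # Back-of-envelope heat output; all constants are assumptions.
          cores = 100_000_000        # 100 million cores
          watts_per_core = 0.2       # assume ~0.2 W per core, i.e. a 20 MW machine
          power_watts = cores * watts_per_core
          # Essentially every watt drawn ends up as heat in the machine room.
          print(f"~{power_watts / 1e6:.0f} MW of waste heat, "
                f"about {power_watts / 1500:,.0f} household space heaters")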

      • Ah Heisenberg you crafty devil!

    • Re: (Score:2, Informative)

      One Million Cores and one Sun hot mentioned in the same post, coincidence? I think not!

    • Well, what was missing from the article summary is that this computer is going to be built using nVidia GPUs, not CPUs for the majority of computing...

      Although really, with the way Fermi is shaping up, it is turning into a very specialized CPU.

    • The Indians already have a word [wikipedia.org] for 10 million cores.

  • by pete-classic ( 75983 ) <hutnick@gmail.com> on Monday November 16, 2009 @02:04PM (#30119430) Homepage Journal

    As amazing as today's supercomputing systems are, they remain primitive

    Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

    -Peter

    • Re: (Score:3, Funny)

      by Yvan256 ( 722131 )

      Forget the president, ask for the winning lottery numbers for the next 20 years!

    • by mcgrew ( 92797 ) * on Monday November 16, 2009 @02:26PM (#30119870) Homepage Journal

      My cell phone is a supercomputer. At least, it would have been if I'd had it in 1972. Rather than being from the future, he, like me, is from the past, living in this science fiction future where all that fantasy stuff has come true: doors that open by themselves, rockets to space, phones that need no wires and fit in your pocket, computers on your desk, ovens that bake a potato in three minutes without the oven getting hot, flat screen TVs that aren't round at the corners, eye implants that cure nearsightedness, farsightedness, astigmatism and cataracts all at once, etc.

      Back when I was young it didn't seem primitive at all. Looking back, geez. When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide. These days they say "you're going to sleep now" and you blink and find yourself in the recovery room, feeling no pain or nausea, with a tiny scar.

      We are indeed living in primitive times. Back in the 1870s a man quit the Patent office on the grounds that everything useful had already been invented. If you're young enough you're going to see things that you couldn't imagine, or at least couldn't believe possible.

      Sickness, pain, and death. And Star Trek. [slashdot.org]

      • No nausea? WTF, I've gone through two surgeries where they put me out in the past 5 years, I was nauseous after both of them.

        • by mcgrew ( 92797 ) *

          I guess I was lucky, or you were unlucky. Or my anesthesiologist was better than yours. I've had a hemorrhoidectomy, cataract surgery, and a vitrectomy in the last ten years and it was all painless and nausea-free. Well, the hemorrhoid surgery was painful the next day, and the vitrectomy was hell on my spinal arthritis, but that's only because I couldn't raise my head except ten minutes an hour for two weeks. Fifty years ago the detached retina would have completely blinded an eye, and cataract surgery wasn

    • by Z00L00K ( 682162 )

      You can still predict that some tech is primitive.

      When a computer develops a mind of its own in a logical manner it's starting to reach the human level, and we can start to discuss whether it's primitive or not. If it starts to reproduce on its own, it's time to be careful.

      • Sarah Connor?

      • Re: (Score:3, Informative)

        by turgid ( 580780 )

        When a computer develops a mind of its own in a logical manner it's starting to reach the human level, and we can start to discuss whether it's primitive or not. If it starts to reproduce on its own, it's time to be careful.

        That's not directly related to computing power per se. A computer 100 000 000 times as powerful as today's, running today's software, will still not have developed a mind of its own. It'll just be very, very fast indeed.

    • by David Greene ( 463 ) on Monday November 16, 2009 @02:38PM (#30120076)

      Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

      Pretty easily, actually. There are lots of problems to solve, not the least of which is the programming model. We're still basically using MPI to drive these machines. That will not cut it on a 100-million-core machine where each socket has on the order of 100 cores. MPI can very easily be described as "primitive," as well as "clunky," "tedious" and "a pain in the ***."

      How do we checkpoint a million-core program? How do we debug a million-core program? We are in the infancy of computing.
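
      To make "primitive" concrete, here's a minimal mpi4py sketch (illustrative only; it assumes mpi4py and an MPI runtime are installed, and is in no way Jaguar's actual code). Even a trivial ring exchange makes the programmer hand-manage ranks and message matching, and that bookkeeping is what stops scaling gracefully at millions of cores:

        # Run with, e.g.: mpirun -n 4 python ring.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()   # this process's id
        size = comm.Get_size()   # total number of processes

        # Pass a value around a ring; every neighbour and message is spelled out by hand.
        dest = (rank + 1) % size
        source = (rank - 1) % size
        received = comm.sendrecv(rank * rank, dest=dest, source=source)
        print(f"rank {rank}/{size} received {received} from rank {source}")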

      • Re: (Score:3, Funny)

        by vtcodger ( 957785 )

        ***How do we debug a million-core program?***

        What is this "debugging" thing you speak of? If you are asking how we will test software for a million core system, we'll do it the same way we always have. We'll get a trivial test case to run once, then we'll ship.

    • Check Wikipedia for "Thinking Machines" and "transputer", and if you have more than one CPU/core, launch a game and see whether all the cores are used effectively without massive additional work from the game publisher.

      The technology is primitive; even a billion-processor machine doesn't save it from being primitive. At the very least, the software is.

    • by Jekler ( 626699 )

      I think of our supercomputing systems as primitive in the same way that cavemen wouldn't end up with a rocket thruster just by throwing enough logs on a fire.

      Without more advanced software designs and some kind of revolutionary system architecture, adding more cores ends up being only slightly better than a linear progression. They're primitive in that our supercomputers are seldom more than the sum of their parts.

  • by 140Mandak262Jamuna ( 970587 ) on Monday November 16, 2009 @02:13PM (#30119614) Journal
    The programming techniques and mathematical formulations needed to take advantage of such a very large number of processors continue to be the main stumbling blocks. Some kinds of simulations parallelize naturally. Time-accurate fluid flow simulation, for example, is very easy to parallelize, and technically you can devote a processor to each element and do the time marching nicely. But not all physics problems are amenable to parallelization. Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the time step shrinks so much that the simulation takes too long even on a 100 million processor machine.

    The CFL condition that limits the maximum time step one can take shows no sign of relenting. Score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.
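
    To put numbers on the CFL point (an illustrative sketch, not from the parent post): with an explicit scheme the stable step is roughly dt <= C * dx / u, so halving the cell size in 3D means 8x the cells and 2x the steps, about 16x the work per halving.

      # Rough CFL bookkeeping for an explicit 3D solver; all numbers are illustrative.
      C = 1.0      # Courant number limit of the scheme
      u = 340.0    # signal speed in m/s (speed of sound in air, say)

      for dx in (1.0, 0.5, 0.25):              # cell size in metres
          dt = C * dx / u                      # largest stable time step
          cells = (100.0 / dx) ** 3            # cells in a 100 m cube
          steps = 1.0 / dt                     # steps to advance one simulated second
          print(f"dx={dx:.2f} m  dt={dt:.2e} s  work ~ {cells * steps:.2e} cell-updates")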

    • by radtea ( 464814 )

      The CFL condition that limits the maximum time step one can take shows no sign of relenting. Score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.

      Yeah, I always get a laugh out of people who think that we're ever going to beat down turbulent flow with higher resolution. It's vortices all the way down, and no matter how clever your implicit scheme you still have to be able to propagate information through the grid at less than the speed of sound to prevent numerical shock waves from b

    • Re: (Score:2, Interesting)

      by David Greene ( 463 )

      Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the time step shrinks so much that the simulation takes too long even on a 100 million processor machine.

      That's true in general. However, techniques like dynamic scheduling can help. Work stealing algorithms and other tricks will probably become part of the general programming model as we move forward. More and more of this has to be pushed to compilers, runtimes and libraries.
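
      A loose desktop-scale illustration of the dynamic-scheduling idea (Python's process pool standing in for an HPC runtime; the task and its costs are made up): unevenly sized work is handed out as workers free up rather than being pinned to ranks in advance.

        from concurrent.futures import ProcessPoolExecutor, as_completed
        import random
        import time

        def refine_block(block_id):
            # Uneven cost per task, as with solution-adaptive meshing.
            time.sleep(random.uniform(0.01, 0.1))
            return block_id

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=4) as pool:
                futures = [pool.submit(refine_block, b) for b in range(64)]
                # Results come back as blocks finish; idle workers immediately pick up more.
                for fut in as_completed(futures):
                    fut.result()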

  • by wondi ( 1679728 ) on Monday November 16, 2009 @02:14PM (#30119624)
    All this effort at creating parallel computing ends up solving very few problems. HPC has been struggling with parallelism for decades, and no easy solutions have been found yet. Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem. When the comparable multi-processing capacity is in your cell phone, what are you going to do with it?
    • Re: (Score:2, Funny)

      When the comparable multi-processing capacity is in your cell phone, what are you going to do with it?

      Stream high definition porn... duh.

      • Re: (Score:3, Funny)

        by shmlco ( 594907 )

        Stream it? With that much processing power it should be able to create it on the spot: "Computer, let's start today's scenario with Angelina Jolie surrounded by...."

    • Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.

      That's not entirely accurate. HPC systems are designed to solve a class of problems. That's not the same thing as a "particular" problem. Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling. It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.

      • by Again ( 1351325 ) on Monday November 16, 2009 @02:45PM (#30120204)

        That's not entirely accurate. HPC systems are designed to solve a class of problems. That's not the same thing as a "particular" problem. Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling. It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.

        So you're saying that OpenOffice would still take forever to start.

        • Yes. So by extending it to a million core machine, OpenOffice would take million x forever* to load if one instance is opened per core.

          *Forever = two seconds after a mouse click.

    • Parallel computing is great for solving NP-Complete problems. If you have enough cores for every possible solution you can evaluate all possible paths at the same time and compare the results.
      • by 1729 ( 581437 )

        Parallel computing is great for solving NP-Complete problems. If you have enough cores for every possible solution you can evaluate all possible paths at the same time and compare the results.

        That's tough to manage when the number of possible paths grows exponentially with respect to the input size.
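
        A tiny sketch of that blow-up (illustrative Python, using subset sum as the stand-in NP-complete problem): the candidate "paths" double with every extra input element, so "one core per path" already needs over 100 million cores at 27 elements and roughly 10^90 at 300.

          from itertools import combinations

          def subset_sum_bruteforce(nums, target):
              # Check every subset: 2 ** len(nums) candidate "paths".
              for r in range(len(nums) + 1):
                  for combo in combinations(nums, r):
                      if sum(combo) == target:
                          return combo
              return None

          print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
          for n in (20, 27, 60, 300):
              print(f"{n} elements -> {2.0 ** n:.3e} candidate paths")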

    • I'm always wary of making an infamous "50 MB of memory is all you'll ever need" type of claim, so I like to believe that we'll figure out how to use greater processing power by the time it gets here. We haven't had too much trouble with that so far. As far as actual use, if we ever get products like Morph (http://www.youtube.com/watch?v=IX-gTobCJHs [youtube.com]), there might be a need for massively parallel processing. At the very least, such computing power would likely be needed to make such products.
  • by 140Mandak262Jamuna ( 970587 ) on Monday November 16, 2009 @02:15PM (#30119638) Journal
    Technically, shouldn't 640K processors be enough for everyone?
  • Oink, oink (Score:2, Insightful)

    by Animats ( 122034 )

    The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.

    Sounds like a pork program. What are "bio energy products", anyway? Ethanol? Supercomputer proposals seem to come with whatever buzzword is hot this year.

    It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

    • Re: (Score:2, Insightful)

      by David Greene ( 463 )

      Sounds like a pork program. What are "bio energy products", anyway? Ethanol?

      I'm no expert on this, but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo. Combustion modeling is a common problem solved by HPC systems. Or maybe we can explore how to use bacteria created to process waste and give off energy as a byproduct. I don't know, the possibilities are endless.

      It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

      Define "supercomputer." Sony uses them. So does Boeing. The auto industry uses

    • Isn't quantum computing supposed to solve all these problems without the need for a zillion cores? Or have I latched onto the wrong panacea here?

    • These systems cost a lot; it might take buzzwords to get politicians to buy into them and fund these sorts of projects. Even so, many energy projects are worth pouring more research into, even if they often get watered down to a single misleading buzzword.
    • It's striking how few supercomputers are sold to commercial companies.

      I'm sure that in the early 20th century somebody was saying, "It's striking how few airplanes are sold to commercial companies," and going on to draw the conclusion that government spending on aircraft was a pork program. (And I'm sure there was some pure pork spending involved, but 100 years later, the overall effect of that kind of spending lets us use airplanes for things that would have been unthinkably expensive when people started spending money on them).

      Today's supercomputer is the next decade's mid

  • It's interesting that 4 of the top 5 supercomputers are running AMD, while 402 of the Top500 are running Intel.

    What's the cause of this? Value? Energy-saving? Performance?

    • Re:AMD vs Intel (Score:5, Informative)

      by Eharley ( 214725 ) on Monday November 16, 2009 @02:25PM (#30119854)

      I believe AMD was the first mass market CPU to include an on-board memory controller.

    • Re:AMD vs Intel (Score:4, Informative)

      by confused one ( 671304 ) on Monday November 16, 2009 @02:57PM (#30120464)
      I'd be guessing but here are three possible reasons AMD might be in that place:
      1.) Value, ie. lower cost per processor
      2.) Opteron has built-in, straightforward 4-way and 8-way multiprocessor connectivity; until recently, Xeon was limited to 2-way connectivity without extra bridge hardware.
      3.) Opteron has higher memory bandwidth than P4 or Core 2 arch.
    • Re: (Score:2, Informative)

      by hattig ( 47930 )

      Easy CPU upgrades because the socket interface stays the same.

      Some of those supercomputers might have gone from dual-core 2GHz Opteron K8s through quad-core Opteron K10s to these new sexa-core Opteron K10.5s with only the need to change the CPUs and the memory.

      Or possibly if the upgrades were done at a board level, HyperTransport has remained compatible, so your new board of 24 cores just slots into your expensive, custom, HyperTransport-based back-end. To switch to Intel would require designing a QPI-based

  • by 140Mandak262Jamuna ( 970587 ) on Monday November 16, 2009 @02:19PM (#30119716) Journal
    We know what answer it is going to give. 42. Save the money.
    • Re: (Score:3, Funny)

      by thewils ( 463314 )

      That's the answer though. They're building this thing to find out what the question was.

  • The Jaguar? (Score:3, Interesting)

    by Yvan256 ( 722131 ) on Monday November 16, 2009 @02:21PM (#30119748) Homepage Journal

    The Jaguar is capable of a peak performance of 2.3 petaflops.

    The first Jaguar [wikipedia.org] was a single megaflop.

    • Re: (Score:3, Funny)

      But did it leak oil?

      • Re: (Score:3, Insightful)

        by Yvan256 ( 722131 )

        Well, okay, the second Jaguar (cue reply about the Atari Jaguar not being the second commercial product called Jaguar).

        Also, the person who modded my post above "interesting" is either on crack or I was right (by luck) about the Jaguar having 1 megaflop of computing power.

        My post, however, was that the Atari Jaguar was a mega-flop, i.e. its sales were abysmal, support non-existent, etc.

  • Maybe this thing will have enough power to run Windows by 2018??

  • human brain (Score:4, Interesting)

    by simoncpu was here ( 1601629 ) on Monday November 16, 2009 @02:24PM (#30119830)
    How many cores do we need to simulate a human brain?
    • Define "simulate" in this context. Processing power? Creativity? Originality? Ingenuity? I didn't think any number of cores could "cause" creativity... aside from a "brute force" method. Try-every-possibility-and-see-if-one-works.
    • by ari_j ( 90255 )
      Which human brain? I can simulate the rational thought processes of 90% of humans with one vacuum tube.
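
      Purely back-of-envelope, with every constant an assumption (none of these figures are from the thread): counting synaptic events alone already lands near ten exaflops, an order of magnitude above the exascale target discussed here, and that's before anyone argues about what "simulate" means.

        # Crude sizing of a brain simulation; every number is an assumed order of magnitude.
        neurons = 8.6e10             # ~86 billion neurons
        synapses_per_neuron = 1e4    # order-of-magnitude estimate
        update_rate_hz = 1e3         # ~1 kHz update rate per synapse
        ops_per_update = 10          # a few arithmetic ops per synaptic event

        ops_per_second = neurons * synapses_per_neuron * update_rate_hz * ops_per_update
        print(f"~{ops_per_second:.1e} ops/s, i.e. ~{ops_per_second / 1e18:.0f} exaflops")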
  • Is this going to be the new processor requirement for running Flash in a web browser?

  • I'm still waiting for that 10GHz Pentium Intel promised for 2004.
  • Come on, that's just silly. I can understand why we might need a few million-core supercomputers, but who would need 100 of them?

  • Scaling the number of cores to 100 Million by 2018: A side effect of Moore's Law.

    Low latency, high bandwidth interconnect that can mesh 100 Million cores: The Next Big Problem in computer architecture.

  • Can I just say... FUCK YES. Thank you!

    As someone who grew up in the Portland (Maine) area it annoys me to no end when people talk about things in "Portland" and neglect to disambiguate - especially when they're talking about the other Portland. :)

  • I take it THIS is a machine that might run Vista well. Too late, SP3 (aka Windows 7) is out.

  • Probably will need a fusion plant to power and cool the thing. But it still sounds awesome. They briefly mention data/memory flow issues, but don't really address them. It is getting to the point where data flow will be as important as processing power, especially as processor counts escalate. You can run as many operations as you want, but if the results can't be delivered somewhere useful, then they are wasted. I am also very interested in how the overhead will be managed when this many processors are involve

  • Department of energy?

    Mapping weather systems?

    Cracking high bit encryption schemes? Listening to every phone call happening on the planet and mapping social patterns?

    BORING!

    No, I want to see a 100 million core supercomputer render one of those 3D "Mandelbulbs" [skytopia.com] and let me do some real-time exploring with a VR helmet.

    Now THAT would be a worthy use for such resources!

    That and being able to grow virtual beings from DNA samples.

    -FL

  • when these exascale systems start asking questions and/or making demands?
  • by petrus4 ( 213815 ) on Monday November 16, 2009 @08:42PM (#30124784) Homepage Journal

    ...what might happen if we could run a copy of The Sims on a truly massive supercomputer. It would need to be somewhat customised for that particular machine/environment, of course, but I think it could be interesting.

    There were times when I did see something close to genuinely emergent behaviour in the Sims 2, or more specifically, emergent combinations of pre-existing routines. You need to set things up for them in a way which is somewhat out of the box, and definitely not in line with real world human architectural or aesthetic norms, but it can happen.

    Makes me think; if we could run the Sims, or the bots from some currently existing FPS, parallel on a sufficiently large scale, we might eventually start seeing some very interesting results come from it, at least within the contexts of said games.
