SGI & NASA Build World's Fastest Supercomputer

GarethSwan writes "SGI and NASA have just rolled out the world's new number-one fastest supercomputer. Its LINPACK benchmark result of 42.7 teraflops easily outclasses the previous mark of 35.86 teraflops set by Japan's Earth Simulator AND the 36.01 teraflops set by IBM's new BlueGene/L experiment. What's even more awesome is that each of the 20 512-processor systems runs a single Linux image, AND Columbia was installed in only 15 weeks. Imagine having your own 20-machine cluster?"
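
For scale, a quick bit of arithmetic on the numbers in the summary (a rough sketch; the per-processor figure is just the LINPACK total divided evenly across the quoted CPU count, not an official spec):

    # Back-of-the-envelope figures from the summary above (illustrative only).
    systems = 20               # Altix systems, each running a single Linux image
    cpus_per_system = 512      # IA-64 processors per system
    linpack_tflops = 42.7      # reported LINPACK result

    total_cpus = systems * cpus_per_system
    gflops_per_cpu = linpack_tflops * 1000 / total_cpus
    print(f"{total_cpus} CPUs, ~{gflops_per_cpu:.1f} GFLOPS each on LINPACK")
    # -> 10240 CPUs, ~4.2 GFLOPS each on LINPACK
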
  • by khayman80 ( 824400 ) on Tuesday October 26, 2004 @11:02PM (#10638268) Homepage Journal
    Well, maybe what makes the weather models inaccurate is the grid size of the simulations. If you try to model a physical system with a finite-element type of approach and set the grid size so large that it glosses over important dynamical processes, it won't be accurate.

    But if you can decrease the grid size by throwing more teraflops at the problem, maybe we'll find that our models are accurate after all?
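    A rough scaling sketch of why grid size is so expensive (the numbers here are illustrative, not from any real forecast model): in a 3-D grid, halving the spacing multiplies the cell count by 8, and the proportionally shorter stable time step adds roughly another factor of 2.

        # Illustrative cost of refining a 3-D grid model (assumes cost ~ cells * time steps,
        # with the time step shrinking in proportion to the grid spacing).
        def relative_cost(refinement):
            """Cost multiplier when grid spacing is divided by `refinement`."""
            cells = refinement ** 3     # three spatial dimensions
            steps = refinement          # proportionally smaller time step
            return cells * steps

        for r in (2, 4, 8):
            print(f"{r}x finer grid -> ~{relative_cost(r)}x the compute")
        # 2x -> ~16x, 4x -> ~256x, 8x -> ~4096x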

  • by chriguhose ( 676441 ) on Tuesday October 26, 2004 @11:07PM (#10638299)
    I'm not an expert on this, but your statement is, in my opinion, not completely true. Weather forecasting is a little bit like playing chess: there are a lot of different paths to explore to find the best solution. Increased computing power allows for "deeper" searches and increases accuracy. My guess is that more accuracy requires exponentially more computing power. Comparing the Earth Simulator to Columbia makes me wonder how much accuracy has increased in this particular case.
  • Cost (Score:5, Interesting)

    by MrMartini ( 824959 ) on Tuesday October 26, 2004 @11:07PM (#10638302)
    Does anyone know how much this system cost? It would be interesting to see what kind of teraflops-per-million-dollars ratio they achieved.

    For example, I know the Virginia Tech cluster (1,100 Apple Xserve G5 dual 2.3 GHz boxes) cost just under $6 million and runs at a bit over 12 teraflops, so it gets a bit over 2 teraflops per million dollars.

    Other high-ranking clusters would be interesting to evaluate in terms of teraflops per million dollars, if anyone knows any.
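    A tiny helper for the comparison being asked for (the Virginia Tech figures are the ones quoted above; Columbia's price isn't in the article, so any number you plug in for it is a placeholder):

        # Teraflops per million dollars, using the figures quoted in this comment.
        def tflops_per_million(tflops, cost_millions):
            return tflops / cost_millions

        # Virginia Tech System X: a bit over 12 TFLOPS for just under $6 million.
        print(f"Virginia Tech: ~{tflops_per_million(12.25, 6.0):.1f} TFLOPS per $M")
        # Columbia: 42.7 TFLOPS; plug in its cost here once someone digs it up.
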
  • 70.93 TeraFLOPs (Score:5, Interesting)

    by chessnotation ( 601394 ) on Tuesday October 26, 2004 @11:20PM (#10638382)
    SETI@home is currently reporting 70.93 teraflops. It would be Number One if the list were a bit more inclusive.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Tuesday October 26, 2004 @11:21PM (#10638393) Homepage Journal
    There it talks of a third run, at 61 teraflops, slightly above the predicted 60 teraflops.


    Ok, so we have Linux doing tens of teraflops in processing, FreeBSD doing tens of petabits in networking, ... What other records can Open Source smash wide open?

  • by anon mouse-cow-aard ( 443646 ) on Tuesday October 26, 2004 @11:24PM (#10638412) Journal
    Uhm... well, 2,560 motherboards, because they're quad-CPU: the Altix C-bricks SGI used here were built to house four IA-64 CPUs per brick. On the other hand... no, really, it is 20 machines with 512 processors each, because the memory is globally shared (all processors have access to all the memory, albeit at different latency and performance: NUMA, Non-Uniform Memory Access), and a single Linux kernel runs on the whole thing.
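    If you want to see what a single-image NUMA layout looks like from userspace, any NUMA-capable Linux box exposes the node topology under /sys; here is a quick sketch using standard sysfs paths (nothing Altix-specific):

        # List NUMA nodes and the CPUs attached to each, via Linux sysfs.
        import glob
        import os

        for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
            with open(os.path.join(node, "cpulist")) as f:
                cpus = f.read().strip()
            print(f"{os.path.basename(node)}: CPUs {cpus}")
        # On a 512-CPU Altix this would print a long list of nodes; on a desktop, usually just node0.
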
  • by kst ( 168867 ) on Tuesday October 26, 2004 @11:28PM (#10638440)
    Why does it take so long to build a supercomputer ...
    It doesn't. [rocksclusters.org]
  • Linux #1 (Score:5, Interesting)

    by Doc Ruby ( 173196 ) on Tuesday October 26, 2004 @11:57PM (#10638631) Homepage Journal
    The most amazing part of this development is that the fastest computer in the world runs Linux. All these TFLOPS increases are really just evolutionary, incremental steps. That the OS is the popular yet still largely underground open-source kernel is very encouraging for NASA, SGI, Linux, Linux developers and users, OSS, and nerds in general. Congratulations, team!
  • by swordgeek ( 112599 ) on Wednesday October 27, 2004 @12:00AM (#10638645) Journal
    Curiously enough, we were talking about the future of computing at lunch today.

    There was a time when different computers ran on different processors and supported different OSes. Now what's happening? Itanic and Opteron running Linux seem to be the only growth players in the market, and the supercomputer world is completely dominated by throwing more processors together. Is there no room for substantial architectural changes? Have we hit the merging point of different designs?

    Just some questions. Although it's no easy feat, I'm less excited by a supercomputer with 10k processors than I would be by one containing as few as 64.
  • by Mulletproof ( 513805 ) on Wednesday October 27, 2004 @12:00AM (#10638646) Homepage Journal
    "What other records can Open Source smash wide open?"

    Mmmm, home consumer usage, maybe?? HA! What was I thinking!?
  • Re:Photos of System (Score:5, Interesting)

    by cnkeller ( 181482 ) <cnkeller@[ ]il.com ['gma' in gap]> on Wednesday October 27, 2004 @12:10AM (#10638700) Homepage
    After reading the article I was curious as to how much room 10K or so processors take up.

    I don't have a square footage number, but it's the overwhelming majority of the server floor. We had to "clear the floor" earlier this summer to make room.

  • by HermesHuang ( 606596 ) on Wednesday October 27, 2004 @01:16AM (#10639072)
    The answer here is "complexity". I do some scientific computing (I've done chemistry, then materials science, and now photonic devices) and there's always more you want to be able to consider. Of course, the best I've used is an 8-processor SGI machine (although that one was a bit old - I think the 2-processor Opteron system I'm using now is actually better). But especially with the materials studies, ideally we wanted to do everything with full quantum-mechanical calculations, which turn into gigantic matrices even for a system of 100 atoms or so. And even then we put strict limits on what orbitals we consider and all that good stuff.

    A slightly more concrete example: right now, with my photonics simulations (finite element) on my dual-Opteron rig, the maximum I can handle is about 180,000 elements (which means a (4*180000)x(4*180000) matrix with complex elements needs to be diagonalized, among other things), and it takes about half an hour for a standing-wave calculation. To do any time propagation, repeat the same calculation in picosecond increments. And with the gridding I can do, for a 100 micron disc resonator in 2-D I have to use light at about 40 microns. To go to the 320 nm wavelength these resonators actually operate at, I'd need roughly 2 orders of magnitude more memory. There's also the time factor to be considered. As with any design process, one must iterate: tweak a little here, run the program, rinse, repeat. How long are you willing to spend in this process before you feel something is "good enough"? The faster the computer spits the answer out, the more things you can try, and the more you can think things over and hopefully make it better.

    And this is a single component in what can be a fairly complex integrated-photonics chip. [And might I mention again I've been working in 2-D this entire time instead of doing a full 3-D simulation?] You give me the computational power and I'll use it. And I'm an experimentalist doing fairly basic research who just wants to check some stuff in the computer before sinking a lot of time and effort into fabricating a test device.

    On the other hand, I actually don't want one of the Top 100 supercomputers in our lab. That would mean I'd be spending all day writing code and designing complex simulations instead of being in the lab getting my hands dirty.

    And as for how common problems requiring this kind of computational power are, I think almost any sort of simulation can easily use it. Consider more terms (everything I've done to date is horribly linearized - let's see some more terms in the Taylor expansion) to account for nonlinear behavior, grid things up finer to get more accurate results, consider more possibilities when dealing with chaotic behavior... I would hope any good scientist would find the possibilities endless.
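    To put the parent's matrix sizes in perspective, here is a rough storage estimate (the nonzeros-per-row figure is an assumption picked purely for illustration; real FEM sparsity depends on the element type and coupling):

        # Rough memory estimate for the complex FEM matrix described above.
        elements = 180_000
        unknowns = 4 * elements                 # 720,000 rows/columns
        nnz_per_row = 60                        # assumed average nonzeros per row (illustrative)
        bytes_per_entry = 16 + 4                # complex128 value + 32-bit column index

        sparse_gb = unknowns * nnz_per_row * bytes_per_entry / 1e9
        dense_tb = unknowns ** 2 * 16 / 1e12    # what a dense complex matrix would need
        print(f"sparse: ~{sparse_gb:.1f} GB, dense: ~{dense_tb:.1f} TB")
        # ~0.9 GB sparse vs. ~8.3 TB dense -- which is why sparse/iterative solvers are the norm.
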
  • by talaphid ( 702911 ) on Wednesday October 27, 2004 @01:42AM (#10639194) Journal

    As I'm RTFA...

    "For instance, on NASA's previous supercomputers, simulations showing five years worth of changes in ocean temperatures and sea levels were taking a year to model. But using a single SGI Altix system, scientists can simulate decades of ocean circulation in just days, while producing simulations in greater detail than ever before. And the time required to assess flight characteristics of an aircraft design, which involves thousands of complex calculations, dropped from years to a single day."

    Being the NASA fanboy I am, I have to wonder whether this massive computational step up doesn't share a lot of similarities with the leap from the punch-card computing age to the modern programming age. With a quantum leap or five in the computation-time bottleneck, more experiments, more radical theories, more wild stuff can be done, because a run won't tie up the supercomputer for the next year... just the week. For all the wild science articles that make us salivate here... is this not the harbinger of a new era?

    /fanboy
  • Re:Photos of System (Score:3, Interesting)

    by peterpi ( 585134 ) on Wednesday October 27, 2004 @05:51AM (#10640002)
    In this picture [sgi.com] you can see what I'm sure is an 'Intel Inside' sticker on the bottom of some of the cabinets.
  • by RageEX ( 624517 ) on Wednesday October 27, 2004 @07:51AM (#10640395)
    Yes, there's some truth to that. One thing SGI has been guilty of is bad management and wishy-washiness. But it should be pointed out that SGI has supported OSS for a very, very long time and has been an important contributor not only to the Linux kernel but has also open-sourced a lot of its own software. Heck, they gave the world XFS for free!
