SGI & NASA Build World's Fastest Supercomputer
GarethSwan writes "SGI and NASA have just rolled out Columbia, the world's new number-one fastest supercomputer. Its LINPACK benchmark result of 42.7 teraflops easily outclasses the previous mark of 35.86 teraflops set by Japan's Earth Simulator AND the 36.01 teraflops posted by IBM's new BlueGene/L prototype. What's even more awesome is that each of the twenty 512-processor systems runs a single Linux image, AND Columbia was installed in only 15 weeks. Imagine having your own 20-machine cluster!"
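A quick sanity check on the summary's numbers (a minimal Python sketch; the 20 x 512 layout and the 42.7-teraflop LINPACK figure come from the story, the rest is arithmetic):

    systems = 20
    cpus_per_system = 512
    linpack_tflops = 42.7

    total_cpus = systems * cpus_per_system              # 10,240 processors
    per_cpu_gflops = linpack_tflops * 1e3 / total_cpus  # sustained LINPACK per CPU

    print(f"{total_cpus} CPUs, ~{per_cpu_gflops:.1f} GFLOPS each")
    # -> 10240 CPUs, ~4.2 GFLOPS each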
Re:It's not the hardware that's important (Score:3, Interesting)
But if you can decrease the grid size by throwing more teraflops at the problem, maybe we'll find that our models are accurate after all?
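Why decreasing the grid size is so expensive: assuming a 3-D model with a CFL-limited explicit time step (a standard textbook scaling, not something claimed in the comment), a minimal sketch:

    def refinement_cost_factor(refine, spatial_dims=3):
        # refine x finer spacing -> refine**dims more cells,
        # plus refine x more time steps to keep the scheme stable (CFL).
        return refine ** (spatial_dims + 1)

    print(refinement_cost_factor(2))   # 16    -- halving the spacing costs 16x
    print(refinement_cost_factor(10))  # 10000 -- a 10x finer grid costs 10000x

Which is why throwing more teraflops at the problem buys only modest refinement per generation of hardware.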
Cost (Score:5, Interesting)
For example, I know the Virginia Tech cluster (1,100 Apple Xserve G5 dual 2.3GHz boxes) cost just under $6 million and runs at a bit over 12 teraflops, so it gets a bit over 2 teraflops per million dollars.
Other high-ranking clusters would be interesting to evaluate in terms of teraflops per million dollars, if anyone knows any.
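Plugging the comment's rough figures into a trivial helper (a sketch; the ~12.25 TFLOPS and ~$5.8M values are plausible readings of "a bit over 12" and "just under $6 million", not exact numbers):

    def tflops_per_million_dollars(tflops, cost_millions):
        return tflops / cost_millions

    # Virginia Tech's System X, per the approximate figures in the comment above.
    print(f"{tflops_per_million_dollars(12.25, 5.8):.1f} TFLOPS/$M")  # ~2.1

Running the same calculation for Columbia would need its price tag, which the story doesn't give.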
Read on to the next paragraph (Score:5, Interesting)
Ok, so we have Linux doing tens of teraflops in processing, FreeBSD doing tens of petabits in networking,
Re:What is the stumbling block? (Score:3, Interesting)
It doesn't. [rocksclusters.org]
Processors aren't relevant anymore? (Score:5, Interesting)
There was a time when different computers ran on different processors and supported different OSes. Now what's happening? Itanic and Opteron running Linux seem to be the only growth players in the market, and the supercomputer world is completely dominated by throwing more processors together. Is there no room for substantial architectural changes? Have we hit the merging point of different designs?
Just some questions. Impressive as the engineering is, I'm less excited by a supercomputer with 10,000 processors than I would be by one containing as few as 64.
Re:Read on to the next paragraph (Score:3, Interesting)
Mmmm, home consumer usage, maybe?? HA! What was I thinking!?
Re:Photos of System (Score:5, Interesting)
I don't have a square footage number, but it's the overwhelming majority of the server floor. We had to "clear the floor" earlier this summer to make room.
Re:Ok, what is the point of this? (Score:5, Interesting)
Slightly more concrete example: right now, with my photonics simulations (finite element) on my dual-Opteron rig, the max I can handle is about 180,000 elements (which means a (4*180000)x(4*180000) matrix with complex elements needs to be diagonalized, among other things), and it takes about half an hour for a standing-wave calculation. To do any time propagation, repeat the same calculation in picosecond increments. And with the gridding I can do, for a 100 micron disc resonator in 2-D I have to use light at about 40 microns. To go to the 320nm wavelength these resonators actually operate at, I'd need roughly 2 orders of magnitude more memory. There's also the time factor to consider. As with any design process, one must iterate. Tweak a little here, run the program, rinse, repeat. How long are you willing to spend in this process before you feel something is "good enough"? The faster the computer spits the answer out, the more things you can try, and the more you can think things over and hopefully make it better.
And this is a single component in what can be a fairly complex integrated-photonics chip. [And might I mention again I've been working in 2-D this entire time instead of doing a full 3-D simulation?] You give me the computational power and I'll use it. And I'm an experimentalist doing fairly basic research who just wants to check some stuff in the computer before sinking a lot of time and effort into fabricating a test device.
On the other hand, I actually don't want to have one of the T100 supercomputers in our lab. That would mean I'd be spending all day writing code and designing complex simulations instead of in the lab getting my hands dirty.
And as for the commonality of problems requiring such computational power, I think almost any sort of simulation can easily use it. Consider more terms (everything I've done to date is horribly linearized; let's see some more terms in the Taylor expansion) to account for nonlinear behavior, grid things up finer to get more accurate results, consider more possibilities when dealing with chaotic behavior... I would hope any good scientist would find the possibilities endless.
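To put the poster's memory numbers in perspective, a back-of-envelope sketch (the 180,000 elements and the 4N matrix dimension are from the comment; the 16-byte complex-double entries and the ~50 nonzeros per row for the sparse case are illustrative assumptions):

    n_elements = 180_000
    dim = 4 * n_elements          # matrix dimension, per the comment
    bytes_per_entry = 16          # one double-precision complex number

    dense_bytes = dim**2 * bytes_per_entry
    nnz_per_row = 50              # assumed FEM sparsity, for illustration
    sparse_bytes = dim * nnz_per_row * bytes_per_entry  # values only

    print(f"dense:  {dense_bytes / 1e12:.1f} TB")  # ~8.3 TB  -- hopeless on a workstation
    print(f"sparse: {sparse_bytes / 1e9:.2f} GB")  # ~0.58 GB -- fits in a dual-Opteron box

Which makes it clear why sparse methods are mandatory at this scale, and why two more orders of magnitude of memory is a real constraint.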
What's the point, I ask myself (Score:2, Interesting)
As I'm RTFA...
"For instance, on NASA's previous supercomputers, simulations showing five years worth of changes in ocean temperatures and sea levels were taking a year to model. But using a single SGI Altix system, scientists can simulate decades of ocean circulation in just days, while producing simulations in greater detail than ever before. And the time required to assess flight characteristics of an aircraft design, which involves thousands of complex calculations, dropped from years to a single day."
Being the NASA fanboy I am, I have to wonder if this massive computational step-up isn't a lot like the jump from the punch-card computing age to the modern programming age. Because of a quantum leap or five in turnaround time, more experiments, more radical theories, more wild stuff can be done, because it won't be tying up the supercomputer for the next year... just the week. For all the wild science articles that make us salivate here... is this not the harbinger of a new era?
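Taking the quote's figures at face value: the old rate was about five simulated years per wall-clock year. If "decades in just days" means, say, 20 simulated years in 5 days (both numbers are guesses for illustration; the article gives no exact figures), the turnaround improvement works out to roughly:

    old_rate = 5 / 365          # simulated years per day (5 years took ~a year)
    new_rate = 20 / 5           # assumed: 20 simulated years in 5 days
    print(f"~{new_rate / old_rate:.0f}x")  # ~292x faster turnaround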