Inside The World's Most Advanced Computer
Junky191 writes: "Just came across an informational page for the Earth Simulator computer, which provides nice graphics of the layout of the machine and its support structure, as well as details about exactly what types of problems it solves. Fascinating for the engineering problems tackled: how would you organize a 5,120-processor system capable of 40 TFLOPS, and of course don't forget about the 10 TB of shared memory." Take note -- donour writes: "Well, the new list of supercomputer rankings is up today. I have to say that the Earth Simulator is quite impressive, from both a performance and architectural standpoint."
Earth Simulator? (Score:5, Funny)
Will the Earth Simulator have the nice fjords by Slartibartfast?
whatever the case is... (Score:1)
Re:whatever the case is... (Score:2)
Ahh, but you're forgetting that the Earth wasn't designed to calculate the answer (Deep Thought, as you rightly note, had already told us what that is); it was designed to calculate the question.
Would've worked, too, if the pesky Golgafrinchans hadn't turned up and perturbed the calculations. By the time the Vogons demolished it, the algorithms were way out of whack anyway.
(Yeah, I know: -1, Offtopic)
Uhhhhhhhh... (Score:1, Funny)
bold name (Score:1, Interesting)
Re:bold name (Score:2, Funny)
Re:bold name (Score:1)
I'd like to learn how to code for this beast (Score:3, Funny)
base 10 measurement of memory (Score:2, Insightful)
Hmmm.. (Score:4, Funny)
OS'es for the supercomputers... (Score:4, Interesting)
Re:OS'es for the supercomputers... (Score:1)
From the article:
The system uses two operating systems to make the computer both familiar to the user (UNIX) and non-intrusive for the scalable application (Cougar). And it makes use of Commercial Commodity Off The Shelf (CCOTS) technology to maintain affordability.
Hmm, I see one familiar OS in there...
Re:OS'es for the supercomputers... (Score:3, Funny)
Re:OS'es for the supercomputers... (Score:2)
Re:OS'es for the supercomputers... (Score:2)
Re:OS'es for the supercomputers... (Score:1)
Re:OS'es for the supercomputers... (Score:2)
Whenever you see the caret (^) followed by a letter, it typically means CTRL+letter. So ^H means holding CTRL while pressing H. These are ASCII control codes that used to do various things on terminals. In the case of CTRL-H, it would be interpreted as Backspace. ^G is another common one and causes your BEL to ring. Try it from your command line (it even works in Windoze). Look up an ASCII chart for the values less than 32 to see what they do.
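(For the curious, a quick demo of that mapping -- a hypothetical Python snippet, not part of the original comment; run it in a real terminal to hear the bell:)

    # Ctrl+letter maps to an ASCII code below 32: (letter & 0x1F).
    for ch in "HG":
        print(f"^{ch} -> ASCII {ord(ch) & 0x1F}")  # ^H -> 8 (BS), ^G -> 7 (BEL)

    print("abc\b\bXY")  # \b is ^H (backspace): overwrites "bc", terminal shows "aXY"
    print("\a")         # \a is ^G: rings the terminal bell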
Good Ol' Days (Score:2)
Back in the BBS days you could enter [Control-H]'s into a message and they would become part of the actual text. That way the reader could actually see the words appear and be back-spaced over and re-written. It was a cool effect. Of course it worked better at 300-2400 baud, where you could actually see the characters being drawn.
Re:OS'es for the supercomputers... (Score:2)
"Non--system disk or disk error. Replace disk and
"Strike any key to continue . .
Shit! Where's the floppy drive on this thing!
Re:OS'es for the supercomputers... (Score:2)
Keyboard Error - Press any key to continue...
Error 0 - There is no message for this error.
Current Disk is no longer valid!
Anyway, I'll bet the ES runs Plan9 or Hurd.
Re:OS'es for the supercomputers... (Score:3, Informative)
Anyway, it probably S-UX.
Re:OS'es for the supercomputers... (Score:2)
Tru64 is nee Digital Unix, nee OSF/1 - a project that came about when DEC, HP and IBM came together to found the Open Software Foundation (OSF) to develop an "open" Unix (they felt threatened by Sun, I think). The OSF released OSF/1, a System V R2-based Unix, which was adopted by DEC as its new Unix to replace RISC Ultrix.
Re:OS'es for the supercomputers... (Score:2, Informative)
NEC Press release [nec.co.jp] mentions SUPER-UX.
NEC SX-6 page [nec.co.jp] has lots of info.
SUX (Score:1)
Re:SUX (Score:2)
Shortly thereafter, Sony started referring to the libraries as "PetaSite" systems instead. Say "PetaFile" out loud, and you'll understand why.
I'd provide a link, but Sony's web site works properly in, like, no known browsers. Pfeh.
Re:OS'es for the supercomputers... (Score:4, Informative)
The magic in SP is partly hardware (high-speed interconnect between nodes), partly the admin software which allows admin tasks to be run simultaneously on many nodes (a non-negligible task); otherwise it is left up to the application programmers to use MPI or similar to get the application to run over the cluster.
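(For readers who haven't met MPI, a minimal sketch of that programming model -- using the mpi4py binding purely for illustration; on an SP the real code would be C or Fortran MPI:)

    from mpi4py import MPI  # illustrative binding choice, not what SP sites run

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the job
    size = comm.Get_size()   # total number of processes

    # Each process computes a partial result on its own slice of the data...
    local_sum = sum(range(rank, 1_000_000, size))
    # ...and the interconnect's speed decides how cheap this reduction is.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(total)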
Single system images typically don't scale this large. Cray's UNICOS/mk (a Unix variant) is a microkernel version of the UNICOS OS, used on the T3E and its predecessors: a microkernel runs on each node, obviously incurring some overhead, but avoiding bottlenecks that otherwise occur as you scale. Here's [cray.com] some info. Last time I checked, the T3E scaled to 2048 processors.
Out of the box, SGI's IRIX scales very nicely up to 128-256 processors. Beyond that, "IRIX XXL" is used (up to 1024 processors to date). This is no longer considered to be a general-purpose OS!
IRIX replicates kernel text across nodes for speed, and kernel structures are allocated locally wherever possible. But getting write access to global kernel structures (some performance counters, for example) becomes a bottleneck as the system scales.
IRIX XXL works around these bottlenecks, presumably sacrificing some features in the process. Sorry, I can't find a good link on IRIX scalability.
Re:OS'es for the supercomputers... (Score:2)
Do you have any information to back this up? I don't work for SGI, but I work closely with them, and I've never heard the term "IRIX XXL." I've worked on the 768-processor O3000 system in Eagan, and as far as I noticed it was just running stock IRIX 6.5.14 (at that time).
Then again, I've never used a 1-kiloprocessor system, either. So maybe we're both right.
Re:OS'es for the supercomputers... (Score:2, Informative)
Standard IRIX therefore scales up to 256 processors on the O2k, and 512 processors with the XXL version. The only difference between the two is that drivers for the former might not work with the latter because a few kernel structures changed. The same is true for the Origin3000 versions of IRIX.
Why (Score:1)
Re:Why (Score:1)
Eniac... (Score:3, Interesting)
Re:Eniac... (Score:2, Interesting)
1.5 * log2(40000 / 0.5) ≈ 24 years.
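(Spelling out the arithmetic as a sketch; the units of the 40000 and the 0.5 are whatever the parent comment intended, and the shape of the formula is doubling-time times number-of-doublings:)

    import math

    doublings = math.log2(40000 / 0.5)  # ~16.3 doublings
    print(1.5 * doublings)              # ~24.4 years at one doubling per 1.5 years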
Re:Eniac... (Score:2)
They're getting around the miniaturization bumps with new manufacturing processes and new materials. Copper technology was the last breakthrough; who knows what the next one will be?
Food for thought... (Score:1)
weird huh?!
the real question is... (Score:4, Funny)
Re:the real question is... (Score:2)
Then again, you could write a massively parallel i686 emulator (precalculating 5,120 instructions simultaneously) and run the world's fastest (and most expensive) PC.
Re:the real question is... (Score:2)
Re:the real question is... (Score:2)
A much more impressive result would be obtained by talking to www.quantum3d.com and getting them to build you a system based on their ObsidianNV system.
Re:the real question is... (Score:5, Funny)
Re:the real question is... (Score:2)
algorithm development (Score:3, Interesting)
While this does a nice job of crunching numbers, how do they know that their algorithms are any good at doing what they do? Or are they trying to simulate things that aren't continuously kicked around by chaos theory?
I ask because I've been looking at dynamics in my spare time, and simulating something as small as cigarette smoke accurately seems impossible (although I must say Jos Stam and Co did a nice job of making it look real). So it seems a bit bewildering to see something trying to simulate the earth, even if only at a macro level.
Re:algorithm development (Score:2, Insightful)
Ah
Re:algorithm development (Score:1)
The ASCI computers for instance are undoubtedly being validated against the last few nuclear tests.
This is harder than it might sound.
BTW "big things are easier to simulate than small things" (in another reply) is utter crap.
Goes to show you though that using "off-the-shelf" hardware is not necessarily the best thing to do.
Re:algorithm development (Score:1)
one of the purposes they said was "Establishment of simulation technology with 1km resolution."
Probably use some old data and see if they can predict a little bit into the future with reasonable accuracy.
As a side note: on the "problem it solves" page, notice how many "seismicity"-related items there are (percentage-wise)! I think the true reason they want this thing is to predict earthquakes. I don't blame them; Tokyo is expecting a "big hit", and considering almost half of the Japanese population lives there (probably even more than that in terms of Japan's money), it's a good idea that they are trying to predict these things.
P.S. Japan experiences small quakes almost daily; most of them cannot be felt, but it means they have tons of data to verify their simulations against.
Re:algorithm development (Score:5, Informative)
Re:algorithm development (Score:3, Insightful)
This is an extremely insightful comment.
What's being suggested here is akin to this: sure, they've got the most powerful car in the world, but to get from LA to New York you've got to head east. Heading north won't help much, no matter how powerful your car is.
This is what gets me about all these global warming "earth is going to heat up and cool down and rain and drought and..." predictions. How can they be sure they're even in the ballpark?
One variable out, and their predictions could be off by a massive amount. The simplifications made so the computer can run the predictions at all may not capture the nuances and subtleties of the real world.
That's why, in many instances, I look at these computers with perhaps more cynicism than most other people. They're great for testing theories, and for letting scientists run computations they otherwise couldn't. But just because it's come out of a billion-dollar computer doesn't mean it's a golden egg.
It's like that old saying that came out when word processors were first invented: shit in, shit out. Just because it's been through a fancy (or expensive) machine doesn't make the outcome any more valid.
-- james
Re:algorithm development : real world results. (Score:2)
Just because it's been through a fancy (or expensive) machine doesn't make the outcome any more valid.
Modelling real processes is a science which has been around for as long as computation. The simulations I used to run with DYNAMO (discrete simulation of general PDEs) on a minicomputer were in some ways the coolest. They were also the slowest: a 10-state thermal transfer model could take an hour on a $200k processor.
It is quite possible to look at fine-grained results using finite-element or finite-difference methods in mechanical and fluid dynamics problems. For instance, looking at vortex shedding is within the realm of the possible for a current-model PC or workstation.
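(As a flavour of what a finite-difference method looks like, a minimal sketch: explicit 1-D heat diffusion, the simplest relative of the thermal models mentioned above. The numbers are arbitrary demo values, not from the comment.)

    import numpy as np

    nx, nt = 50, 500
    alpha, dx, dt = 1.0, 1.0, 0.2  # stability needs alpha*dt/dx**2 <= 0.5
    u = np.zeros(nx)
    u[nx // 2] = 100.0             # a hot spot in the middle of a rod
    for _ in range(nt):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    print(u.round(2))              # heat has diffused outward from the centre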
Verification is done against known data sets, and most simulation work involves checks on accuracy.
Yes, problems which are really in the 'butterfly effect' region are very difficult; interesting (useful) work has been done taking such phenomena to the molecular level. For something like crack propagation, finite element methods have to be very detailed indeed to be predictive, and while you can use these for useful results, the 'interesting' part needs to be calculated at the atomic level. That, however, I have only seen done in simulations of highly regular materials.
Many of the chaotic results happen where there is a delicate balance in total energy, e.g. the dynamics of cigarette smoke. 'Useful' problems, however, usually involve substantial energy transfers, and at some computational scale these are not chaotic.
Solar and geothermal energy input into global weather patterns involves a LOT of energy, and modelling is generally easier where you are looking at such problems.
Computational weather prediction has made impressive strides. Ten years ago the ability to predict weather in New England was dismal; today, between better sensors and better models, the 5-day forecast is more often correct than not.
Re:algorithm development (Score:2, Interesting)
The accuracy of the simulation can be measured in terms of the length of time that the predictions remained within a given error of the actual weather.
To overcome the problem of inaccurate starting states, high performance computing is now used to run many simulations of the same thing in parallel, each with a slightly different starting state. The hope is to identify many of the "exceptional" outcomes, and assign a probability to that outcome.
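(A toy illustration of that ensemble idea, assuming nothing about any real forecasting model: integrate the chaotic Lorenz system from many slightly perturbed starting states and look at how the members spread.)

    import numpy as np

    def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    rng = np.random.default_rng(1)
    members = [np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal(3)
               for _ in range(50)]
    for _ in range(3000):                       # march every ensemble member forward
        members = [lorenz_step(s) for s in members]

    xs = np.array([m[0] for m in members])
    print(xs.mean(), xs.std())  # a wide spread means a low-confidence forecast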
A good example of this is the October 1987 storm in the UK, which the Met Office didn't see coming at all. It is believed that had they been able to run many simulations with different starting states, they would have seen that starting conditions slightly different from those used in their simulation would have led to the craziness [bbc.co.uk] that ensued.
More information about the storm and its cause can be found here [stvincent.ac.uk] or in the Google cache [216.239.39.100].
Nothing New (Score:2, Funny)
We already have that: http://whatisthematrix.warnerbros.com/
Earthquakes shmirthquakes. (Score:5, Funny)
"Advantages" of ES (Score:5, Interesting)
Another advantage would be that since ASCI White is a hyper RS6K, you could use a lower end model (and IBM could rather inexpensively offer a lower end model) to develop your models on before using the relatively expensive big boy to do the actual simulations. I have to admit that this point is moot if they don't keep the utilization of the thing up pretty high most of the time.
Though they mention that ES "only needs 5104" processors vs 8192 for AW, it looks like ES still takes up massive amounts of space. Now, ES' storage is significantly larger than AW's, so maybe that's where all the space is being eaten, but it would be interesting to see what the actual cabinet space/power requirements for the two machines sans storage are (assuming they are both using standard stuff for storage).
Other things: since AW is based on OTS parts, is it easier to get parts when processing units conk out? Is it simpler for a tech to work on the unit? And since Linux is already running on the RS6K, theoretically, with a few device drivers you could run Linux on that bad boy.
Of course all this is moot in the non-real-world of supercomputers. With seemingly infinite budgets, the only _real_ measure is absolute performance, and ES obviously has the edge here. But if I were the IBM sales rep for supercomputing, I'd sure be hyping the fact that when it's not simulating nuclear explosions, you can run Gimp and Mozilla.
Re:"Advantages" of ES (Score:1)
Wow... somebody else who knows about the POWER4...
I am curious, do you know how they USE the darn things? I mean, the sucker has over 5,000 pins (!!), and I suppose the thermal requirements are tremendous too. Any info would be appreciated.
But if I were the IBM sales rep for supercomputing, I'd sure be hyping the fact that when it's not simulating nuclear explosions, you can run Gimp and Mozilla.
don't forget to mention the terrific pr0n potential.
Re:"Advantages" of ES (Score:4, Informative)
I'm not sure how much you've looked up, so some of this information may be redundant, but here's what I've been able to dig up:
That's a beast of a chip! The packaging looks pretty substantial as well. I don't doubt the cooling systems are fairly remarkable, although I can't find any specific information about 'em.
cheers!
Re:"Advantages" of ES (Score:1)
by the way (Score:1)
a small difference... heh
Re:by the way (Score:2)
Now, ES' storage is significantly larger than AW's
Re:"Advantages" of ES (Score:3, Informative)
You can get an SX-6i [supercomputingonline.com]. The processor in ES is not made only for ES. And I don't think you would sell many supercomputers for IBM if you were advocating Gimp and Mozilla as applications...
Re:"Advantages" of ES (Score:2)
True, but if you look at the Top500 list [top500.org], you'll see significantly more IBM machines across the board than NEC, including a large number of "standard" units (sold as kick-ass RS/6000s rather than "supercomputers", e.g. the P690 [ibm.com]).
I would think that this gives them a significant edge in development costs, as well as giving their customers more flexibility.
And I don't think you would sell many supercomputers for IBM if you were advocating Gimp and Mozilla as applications
Oh come on, nuclear physicists like to clean up photos of their dogs (they probably don't have girlfriends) and surf the web just like anyone else.
Re:"Advantages" of ES (Score:2, Insightful)
> across the board than NEC
As widely known, and publicly acknowledged by one of the authors of that list (Prof. Meuer), the LINPACK benchmark used to build the Top500 list is biased against vector supercomputers like NEC's.
Supercomputer performance cannot be measured by a single number, really.
Re:"Advantages" of ES (Score:2, Informative)
Re:"Advantages" of ES (Score:2)
People, people, this was a joke. You know, not intended to be taken seriously. Of course if someone is going to spend 10 figures on a computer, they don't give a flip about Gimp, etc. Chill, it's ok, put the Pepsi down.
Can you imagine... (Score:5, Funny)
Ahh, it may be all powerful... (Score:1)
I clicked on the link (Score:1)
-- james
ps humour, not troll/flamebait
"I predict (Score:3, Funny)
That's nothing! Check out the Playstation 3! (Score:1)
http://www.geocities.com/s178.rm/index2.html [geocities.com]
Unfortunately... (Score:2)
Re:Unfortunately... (Score:3, Funny)
Well, it would be kinda ironic if it got knocked out by an earthquake. Especially if it didn't predict it.
Regards, Ralph.
alright now. (Score:2, Insightful)
SETI? (Score:2, Interesting)
Probably much greater (Score:2)
However, the SETI network could never do what the ES does: although it is compute-distributed, the data is centralized, so the actual compute rate is limited by communication speed. And over the Internet that's really slow compared to the real interconnect architectures used for these sorts of applications. At least until the Internet can compete with a multi-gigabyte-per-second local interconnect. Of course, by then the processors will still be outstripping the network, so you probably still wouldn't be able to do it.
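(Back-of-envelope sketch with assumed, purely illustrative numbers: moving a 10 TB working set over ~1 Mbit/s volunteer links vs. a ~2 GB/s local interconnect.)

    data_bytes = 10 * 2**40       # a 10 TB working set
    internet_Bps = 1e6 / 8        # assumed ~1 Mbit/s per host, in bytes/s
    interconnect_Bps = 2e9        # assumed ~2 GB/s local fabric, in bytes/s
    print(data_bytes / internet_Bps / 86400, "days over the internet")
    print(data_bytes / interconnect_Bps / 3600, "hours over a local interconnect")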
Topic (Score:2, Funny)
How did they get inside my brain???
Earth Simulator OS + German TV (Score:5, Informative)
The Earth Simulator is running SUPER-UX, the same operating system as the rest of the NEC supercomputers [nec.co.jp].
The German-language TV channel 3sat [3sat.com] will broadcast a 30-minute film on the Earth Simulator on Monday, the 24th of June, at 21:30, and on Tuesday, the 25th of June, at 14:30.
NSA (Score:1)
You can bet your butt that if information about the "World's Fastest Supercomputer" is available to the general public, the NSA has got something bigger and better.
Re:NSA (Score:2)
Uh ... no German university in the top500 ... (Score:2, Interesting)
Although they are setting up a quite cool Sun Fire UltraSPARC cluster [rwth-aachen.de] running Solaris.
The setup will consist of 16 Sun Fire 6800 SMP nodes (1500 MHz; each node is a 24-processor SMP system with 24 GB of shared main memory) and 4 Sun Fire 15K SMP nodes (1500 MHz; each having 72 processors and 144 GB of main memory), giving a max arithmetic performance of 4 TFlop/s.
Check the link to see for yourself (like you don't have anything better to do, right?).
Sad/funny part of the story: the cluster is going to be finished in 2003 ...
I should check Moore's law on top 500 supercomputers...
At least now the world knows we do cool stuff too ...
It runs mainly FORTRAN? (Score:1)
SimEarth, SimCity and Sims (Score:1)
Wonder if the Earth Simulator could run a version of SimEarth, down to every country, state, city, person, etc. Doesn't the Sim series also throw in weather events? Tornadoes, etc.? The Earth Simulator should be able to crank out a few of those...
/.'ed (Score:2, Funny)
I guess top500.org isn't running on one of them.
My old computer was more advanced. (Score:1, Funny)
It didn't fry.
Beat that, Earth Simulator! Beat that!
(I still use the now-slightly sticky soundcard from it
Divine intervention? (Score:1)
stupid reference right below this.... (Score:3, Funny)
Hmmm, now where have I heard of an idea like that? [whatisthematrix.com]
"Inside" the Earth Simulator (Score:4, Interesting)
Re:"Inside" the Earth Simulator (Score:2)
A better view of the layout is shown on the bottom right of this [jamstec.go.jp] page; the purpose of the color coding appears to be aesthetics. You know, for when Godzilla rips the roof off the building.
What about the REAL fastest computers? (Score:2)
Wow (Score:2, Funny)
Some nostalgia
wow... (Score:1)
(The article has some nice graphics too!)
And because I KNOW there's probably 2000 "A beowulf cluster of those..." posts that are below my threshold: I would fear for the safety of mankind if someone made a cluster of
supercomputer = greater than a teraflop (Score:2)
About half a human brain's worth of processing power (Score:2, Interesting)
What they are really trying to do... (Score:3, Funny)
Maxis Introduces......SimEarth XP!!!! (Score:2, Funny)
SimEarth XP
System Requirements:
40Tflop 5120 Processor Cluster
10TB of System Memory
256 Color Display
4X CD-ROM Drive
Arctic Rated Parka
"Sales thus far have been slow..." confessed Wright, "...however we're expecting at least one large customer in the coming months."
-Chris
from the preview-of-2026-ipaq dept. (Score:2)
--pi
... Erm, sorry. That's 3760000000. They release too many of these things. And it only costs 1/8 of its model number, like the rest do!
Re:the meaning of life (Score:1)
b) Just because something is a [fictional] supercomputer does not preclude the existence of another, superer computer simulating it.
Re:Where would the SETI project rank? (Score:2, Informative)
"The most powerful computer, IBM's ASCI White, is rated at 12 TeraFLOPS and costs $110 million. SETI@home currently gets about 15 TeraFLOPs and has cost $500K so far"
Earth Simulator Project total peak performance: 40 TeraFLOPS.
Of course the systems' architectures are different, so using raw speed to compare processing power is difficult.
There's a TFlop chart on the earth development button.
Simulating the Earth down to square kilometers will be impressive.
Re:Apples and Oranges (Score:3, Interesting)
Do supercomputing manufacturers cheat on benchmarks? I don't know. Presumably it would be a rather expensive proposition-- and since supercomputing sites will benchmark with a variety of specialized and general purpose libraries, it seems unlikely to work.
There are, of course, differences between weather simulations and galactic evolution simulations. But field-specific benchmarks are inappropriate for a site like Top500; the whole point of the site is to let someone analyse gross trends ("This memory architecture once dominated the rankings; now it's used by only a few entries. Perhaps our next computer platform shouldn't be based on that architecture.") and possibly write journal articles about it.
In addition, general purpose supercomputing sites are relatively common.
Re:Apples and Oranges (Score:2)
Almost, but not quite. The Top500 rankings are based on solving the dense system of linear equations generated by the LINPACK benchmark driver. Using the LINPACK code itself is optional. For that matter, using Gaussian elimination with partial pivoting is optional, as long as you meet the error bound of O(n * epsilon).
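(A hedged sketch of that acceptance test -- a NumPy stand-in, not the actual HPL driver: solve a dense random system and check the scaled residual against the O(n * epsilon) bound.)

    import numpy as np

    n = 1000
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    x = np.linalg.solve(A, b)  # LU with partial pivoting under the hood
    eps = np.finfo(float).eps
    resid = np.linalg.norm(A @ x - b, np.inf) / (
        np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf) * n * eps
    )
    print(resid)  # O(1) means the run met the Top500-style error bound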
Re:it's 2.45 in the morning (Score:1)
Re:Not a Beowulf cluster comment, but... (Score:4, Interesting)
Earth Simulator: 35.86 TFLOPS (according to the Top500 list).
Seti@Home network: 37.07 TFLOPS (average over the last 24 hr., according to the site).
Just because it is an incredibly powerful machine doesn't mean it has the distributed computing projects beat.
Re:Not a Beowulf cluster comment, but... (Score:2, Insightful)
Re:Not a Beowulf cluster comment, but... (Score:2, Insightful)
-Leperflesh