Two Directions for the Future of Supercomputing
aarondsouza writes: "The NY Times (registration required, mumble... mutter...) has this story on two different directions being taken in the supercomputing community. The Los Alamos labs have a couple of new toys. One built for raw numbercrunching speed, and the other for efficiency. The article has interesting numbers on the performance/price (price in the power consumption and maintenance sense) ratios for the two machines. As an aside... 'Deep Blue', 'Green Blade' ... wonder what Google Sets would think of that..."
Gridcomputing sites (Score:2)
Re:Gridcomputing sites (Score:4, Informative)
Latency issues are still going to be there, which makes Grid environments unsuitable for the majority of simulations. You can't do nuclear-event simulations effectively if you have the multiple-second delays in communicating between processors that you get in Grids.
On the other hand, Grids do have several advantages: they provide similar TFLOPS for a much lower price, using several geographically separated systems gives more researchers access, and research in this area has a lot of practical spin-offs for the future.
Re:Gridcomputing sites (Score:2)
So are you talking rubbish as well? Because you seem to share exactly the same opinion as I do.
Re:Gridcomputing sites (Score:2)
Probably 8*). No offense taken, and I hope none given!
However, this is a problem: people sometimes don't realise that there are different types of supercomputing problem which need to be approached in different ways. (I assume you do, but some posters may not.)
There are some problems, such as Seti@Home [berkeley.edu], which are suitable for computation in a widely distributed environment. Each SETI unit doesn't rely on any other to be analysed, so it doesn't matter if it takes a long time to communicate between processors (if at all). Others, such as weather simulations, require high-speed, high-bandwidth communication between each processor. In these cases even a Beowulf cluster is going to have far too little bandwidth to be useful. This was the point I was trying to put across. Grids are great for many things, however I'd still want a supercomputer for some problems.
I see it as a parallel evolution between the two methods. Supercomputers to give us the hardware, Grid systems to provide the software & make use of the hardware when it becomes affordable.
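To make that distinction concrete, here is a minimal back-of-the-envelope sketch in Python of the compute-to-communication ratio that decides whether a problem is Grid-friendly. The latency, bandwidth and workload numbers are purely illustrative assumptions, not measurements of any real Grid or application.

    def comm_time(bytes_exchanged, latency_s=0.5, bandwidth=1e6):
        # Time to exchange data over a slow wide-area link (assumed 0.5 s
        # effective latency, ~1 MB/s), standing in for a geographically
        # distributed Grid.
        return latency_s + bytes_exchanged / bandwidth

    def ratio(compute_s, bytes_exchanged):
        # Compute-to-communication ratio: much greater than 1 means
        # Grid-friendly, much less than 1 means the network dominates.
        return compute_s / comm_time(bytes_exchanged)

    # SETI-style work unit: hours of crunching, one small result sent back.
    print("SETI-like:   ", ratio(compute_s=3600.0, bytes_exchanged=1e5))

    # Weather-style step: a fraction of a second of compute, then every node
    # swaps boundary data with its neighbours before the next step.
    print("weather-like:", ratio(compute_s=0.1, bytes_exchanged=1e6))

With these toy numbers the SETI-like task is thousands of times more compute than communication, while the weather-like step spends nearly all its time waiting on the network.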
Interesting.... (Score:1)
On a work-per-dollar basis, which one actually delivers more?
(screw the NY times, I'll glean the data from re-posts) (and screw all of you first-post weenies.... nothing but garbage 'tween your ears)
Re:Interesting.... (Score:1)
Re:Interesting.... (Score:1)
How do these megawatt-sucking super-machines compare to distributed computing?
Compare, say, the Folding@home project and Deep Blue... which has done / does more work/calculations per hour?
How long would it take Deep Blue to take the #1 spot on the Folding@home rankings?
And how great would the potential benefits be, to the human race as a whole, if these monster computers did some folding in their idle time?
Re:Interesting.... (Score:3, Insightful)
Re:Interesting.... (Score:1)
I agree, it would be really interesting to have an interview with some supercomputer guys.
Re:Interesting.... (Score:1)
Re:standard questions (Score:1)
May we imagine a Beowulf cluster of these?
From the article:
Green Destiny belongs to a class of makeshift supercomputers called Beowulf clusters.
A Beowulf cluster of Beowulf clusters? Well, I certainly could imagine one (nothing wrong with my imagination, at least), but what would be the point?
Re:standard questions (Score:1)
suprA ?
If you mean the next step after a supercomputer, that should be a hypercomputer.
But you know hypercomputers are all hype, I'll take a trusty old supercomputer instead any day.
Re:standard questions (Score:1)
Full article w/o registration (Score:1, Informative)
By GEORGE JOHNSON
Moore's Law holds that the number of transistors on a microprocessor -- the brain of a modern computer -- doubles about every 18 months, causing the speed of its calculations to soar. But there is a downside to this oft-repeated tale of technological progress: the heat produced by the chip also increases exponentially, threatening a self-inflicted meltdown.
A computer owner in Britain recently dramatized the effect by propping a makeshift dish of aluminum foil above the chip inside his PC and frying an egg for breakfast. (The feat -- cooking time 11 minutes -- was reported in The Register, a British computer industry publication.) By 2010, scientists predict, a single chip may hold more than a billion transistors, shedding 1,000 watts of thermal energy -- far more heat per square inch than a nuclear reactor.
The comparison seems particularly apt at Los Alamos National Laboratory in northern New Mexico, which has two powerful new computers, Q and Green Destiny. Both achieve high calculating speeds by yoking together webs of commercially available processors. But while the energy-voracious Q was designed to be as fast as possible, Green Destiny was built for efficiency. Side by side, they exemplify two very different visions of the future of supercomputing.
Los Alamos showed off the machines last month at a ceremony introducing the laboratory's Nicholas C. Metropolis Center for Modeling and Simulation. Named for a pioneering mathematician in the Manhattan Project, the three-story, 303,000-square-foot structure was built to house Q, which will be one of the world's two largest computers (the other is in Japan). Visitors approaching the imposing structure might mistake it for a power generating plant, its row of cooling towers spewing the heat of computation into the sky.
Supercomputing is an energy-intensive process, and Q (the name is meant to evoke both the dimension-hopping Star Trek alien and the gadget-making wizard in the James Bond thrillers) is rated at 30 teraops, meaning that it can perform as many as 30 trillion calculations a second. (The measure of choice used to be the teraflop, for "trillion floating-point operations," but no one wants to think of a supercomputer as flopping trillions of times a second.)
Armed with all this computing power, Q's keepers plan to take on what for the Energy Department, anyway, is the Holy Grail of supercomputing: a full-scale, three-dimensional simulation of the physics involved in a nuclear explosion.
"Obviously with the various treaties and rules and regulations, we can't set one of these off anymore," said Chris Kemper, deputy leader of the laboratory's computing, communications and networking division. "In the past we could test in Nevada and see if theory matched reality. Now we have do to it with simulations."
While decidedly more benign than a real explosion, Q's artificial blasts -- described as testing "in silico" -- have their own environmental impact. When fully up and running later this year, the computer, which will occupy half an acre of floor space, will draw three megawatts of electricity. Two more megawatts will be consumed by its cooling system. Together, that is enough to provide energy for 5,000 homes.
And that is just the beginning. Next in line for Los Alamos is a 100-teraops machine. To satisfy its needs, the Metropolis center can be upgraded to provide as much as 30 megawatts -- enough to power a small city.
That is where Green Destiny comes in. While Q was attracting most of the attention, researchers from a project called Supercomputing in Small Spaces gathered nearby in a cramped, stuffy warehouse to show off their own machine -- a compact, energy-efficient computer whose processors do not even require a cooling fan.
With a name that sounds like an air freshener or an environmental group (actually it's taken from the mighty sword in "Crouching Tiger, Hidden Dragon"), Green Destiny measures about two by three feet and stands six and a half feet high, the size of a refrigerator.
Capable of a mere 160 gigaops (billions of operations a second), the machine is no match for Q. But in computational bang for the buck, Green Destiny wins hands down. Though Q will be almost 200 times as fast, it will cost 640 times as much -- $215 million, compared with $335,000 for Green Destiny. And that does not count housing expenses -- the $93 million Metropolis center that provides the temperature-controlled, dust-free environment Q demands.
Green Destiny is not so picky. It hums away contentedly next to piles of cardboard boxes and computer parts. More important, while Q and its cooling system will consume five megawatts of electrical power, Green Destiny draws just a thousandth of that -- five kilowatts. Even if it were expanded, as it theoretically could be, to make a 30-teraops machine (picture a hotel meeting room crammed full of refrigerators), it would still draw only about a megawatt.
"Bigger and faster machines simply aren't good enough anymore," said Dr. Wu-Chung Feng, the leader of the project. The time has come, he said, to question the doctrine of "performance at any cost."
The issue is not just ecological. The more power a computer consumes, the hotter it gets. Raise the operating temperature 18 degrees Fahrenheit, Dr. Feng said, and the reliability is cut in half. Pushing the extremes of calculational speed, Q is expected to run in sprints for just a few hours before it requires rebooting. A smaller version of Green Destiny, called Metablade, has been operating in the warehouse since last fall, requiring no special attention.
"There are two paths now for supercomputing," Dr. Feng said. "While technically feasible, following Moore's Law may be the wrong way to go with respect to reliability, efficiency of power use and efficiency of space. We're not saying this is a replacement for a machine like Q but that we need to look in this direction."
The heat problem is nothing new. In taking computation to the limit, scientists constantly consider the trade-off between speed and efficiency. I.B.M.'s Blue Gene project, for example, is working on energy-efficient supercomputers to run simulations in molecular biology and other sciences.
"All of us who are in this game are busy learning how to run these big machines," said Dr. Mike Levine, a scientific director at the Pittsburgh Supercomputing Center and a physics professor at Carnegie Mellon University. A project like Green Destiny is "a good way to get people's attention," he said, "but it is only the first step in solving the problem."
Green Destiny belongs to a class of makeshift supercomputers called Beowulf clusters. Named for the monster-slaying hero in the eighth-century Old English epic, the machines are made by stringing together off-the-shelf PC's into networks, generally communicating via Ethernet -- the same technology used in home and office networking. What results is supercomputing for the masses -- or, in any case, for those whose operating budgets are in the range of tens or hundreds of thousands of dollars rather than the hundreds of millions required for Q.
Dr. Feng's team, which also includes Dr. Michael S. Warren and Eric H. Weigle, began with a similar approach. But while traditional Beowulfs are built from Pentium chips and other ordinary processors, Green Destiny uses a special low-power variety intended for laptop computers.
A chip's computing power is ordinarily derived from complex circuits packed with millions of invisibly tiny transistors. The simpler Transmeta chips eliminate much of this energy-demanding hardware by performing important functions using software instead -- instructions coded in the chip's memory. Each chip is mounted along with other components on a small chassis, called a blade. Stack the blades into a tower and you have a Bladed Beowulf, in which the focus is on efficiency rather than raw unadulterated power.
The method has its limitations. A computer's power depends not just on the speed of its processors but on how fast they can cooperate with one another. Linked by high-speed fiber-optical cable, Q's many subsections, or nodes, exchange data at a rate as high as 6.3 gigabits a second. Green Destiny's nodes are limited to 100-megabit Ethernet.
The tightly knit communication used by Q is crucial for the intense computations involved in modeling nuclear tests. A weapons simulation recently run on the Accelerated Strategic Computing Initiative's ASCI White supercomputer at Lawrence Livermore National Laboratory in California took four months of continuous calculating time -- the equivalent of operating a high-end personal computer 24 hours a day for more than 750 years.
Dr. Feng has looked into upgrading Green Destiny to gigabit Ethernet, which seems destined to become the marketplace standard. But with current technology that would require more energy consumption, erasing the machine's primary advantage.
For now, a more direct competitor may be the traditional Beowulfs with their clusters of higher-powered chips. Though they are cheaper and faster, they consume more energy, take up more space, and are more prone to failure. In the long run, Dr. Feng suggests, an efficient machine like Green Destiny might actually perform longer chains of sustained calculations.
At some point, in any case, the current style of supercomputing is bound to falter, succumbing to its own heat. Then, Dr. Feng hopes, something like the Bladed Beowulfs may serve as "the foundation for the supercomputer of 2010."
Meanwhile, the computational arms race shows no signs of slowing down. Half of the computing floor at the Metropolis Center has been left empty for expansion. And ground was broken this spring at Lawrence Livermore for a new Terascale Simulation Facility. It is designed to hold two 100-teraops machines.
Re:Full article w/o registration (Score:1)
Ta,
Tom
Green Destiny looks like an RLX cluster (Score:1)
-ez
Re:Green Destiny looks like an RLX cluster (Score:1, Informative)
Re:Green Destiny looks like an RLX cluster (Score:3, Informative)
all i see (Score:3, Informative)
Though I did find the line about Q needing to be rebooted every few hours kinda funny. I mean, when are they gonna learn to stop installing Windows on a 100-million-dollar supercomputer?
Re:all i see (Score:1)
Re:all i see (Score:2)
Of course, that would imply that there was always a machine coming out of reboot...
More seriously, how could one figure out the optimum strength/speed/Flops/? for a node if one were building a cluster of identical machines? It seems clear that up to some size it's better to stick everything in the same box, and beyond that it makes more sense to cluster them. But how would one decide where to draw the line? (I was considering this as a technical problem, for the moment ignoring costs [which would often just say: grab a bunch of used machines and slap them together].)
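One way to reason about where to draw that line, as a sketch only: write down a crude time model in which communication overhead grows with node count, and find where adding nodes stops paying off. The workload and overhead coefficients below are invented toy values, not benchmarks of any real cluster.

    # Toy model: total time on n identical nodes = its share of the compute
    # plus a per-step communication cost that grows with node count
    # (think all-to-all exchanges). All coefficients are invented.

    def time_on_nodes(n, work=1000.0, steps=100, comm_per_peer=0.1):
        compute = work / n
        comm = steps * comm_per_peer * (n - 1)
        return compute + comm

    best = min(range(1, 257), key=time_on_nodes)
    print(f"fastest at {best} nodes: {time_on_nodes(best):.0f} time units")
    # Beyond this point, each extra node saves less compute time than it
    # adds in communication -- the intuition behind putting as much as you
    # can in one box before you start clustering.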
Re:all i see (Score:1)
Most applications which run on the SP computers get 5-10%. Unless they give up their elliptic equations, their I/O, ...., I can't see someone getting close to 50%.
Re:all i see (Score:1)
Most applications which run on the SP computers get 5-10%. Unless they give up their elliptic equations, their I/O, ...., I can't see someone getting close to 50%.
That's why I wrote "if you are lucky" :) As for the Earth Simulator, real-life applications (when properly vectorized) running on the SX-6, which is basically the same technology, routinely achieve above 80%, in some cases up to 90%, of peak performance. There are two reasons for this: first, the SX series, unlike other vector processors, doesn't need particularly long vectors; it performs well already with short vectors. The other reason is extremely high memory bandwidth.
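The memory-bandwidth point can be turned into a simple bound: sustained performance can't exceed min(peak, bandwidth x flops-per-byte). A quick sketch, with placeholder machine numbers that are not meant to be real SX-6 or SP specs:

    # Simple "balance" bound on sustained performance:
    #   sustained <= min(peak_flops, memory_bandwidth * flops_per_byte)
    # The two hypothetical machines below are placeholders chosen only to
    # show why a high bandwidth-to-flops ratio lets codes run near peak.

    def sustained_fraction(peak_gflops, bw_gbytes, flops_per_byte):
        bound = min(peak_gflops, bw_gbytes * flops_per_byte)
        return bound / peak_gflops

    intensity = 0.5  # flops per byte moved -- typical of bandwidth-hungry codes

    # "vector-like" machine: memory bandwidth roughly matches its peak
    print(sustained_fraction(peak_gflops=8.0, bw_gbytes=16.0, flops_per_byte=intensity))  # 1.0

    # "commodity-like" machine: much less bandwidth per flop
    print(sustained_fraction(peak_gflops=8.0, bw_gbytes=1.0, flops_per_byte=intensity))   # 0.0625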
I have an idea (Score:2, Redundant)
Re:I have an idea (Score:3, Insightful)
I'm sure it's legal. It's like sharing/swapping discount passes at the supermarket.
Re:I have an idea (Score:2)
Standard logins (Score:1)
Re:I have an idea (Score:1)
You'll need to un-slashmangle the URL above.
Google set reply (Score:2)
(my emphasis)
Re:Google set reply - OT (Score:3, Funny)
Never having seen Google Sets before, I typed in:
Cmdr Taco
Hemos
It expanded it to:
Predicted Items
Hemos
Cmdr Taco
The Andover brunette
The blonde masseuse
CmdrTaco
Mel Gibson
Martha Stewart
The me redhead
Purple Bikini Girl
I'd love to know how it came up with those results...
Cheers,
Jim in Tokyo
Re:Google set reply - OT (Score:2, Informative)
A little bit of research turns up this [google.com] on how Google Sets works. There's a link [peterme.com] at the bottom of that message for an introduction to faceted sets.
And now for the fun bit. Looking for a set with just the keyword Porn [google.com], I got some very interesting results:
Predicted Items
Porn
Warez Sites
pirated software
Irc Bots
Mp3
Spamming Software
Re:Google set reply - OT (Score:1)
Except for that last one... >:-(
Re:Google set reply - OT (Score:1)
I never would have thought that all those things could be so easily related... but there you have it!
Re:Google set reply - OT (Score:1)
mubme, mutter ? (Score:5, Funny)
Next time you put "registration" between brackets, followed by two words, you'd better make sure that those two words are a userID and password!
I really wonder what the NYT logfile-monkeys think when they see a zillion 'mumble/mutter' login attempts...
Re:mubme, mutter ? (Score:4, Funny)
> "The NY Times (registration required, mumble... mutter..."
> I really wonder what the NYT logfile-monkeys think when they see a zillion 'mumble/mutter' login attempts...
Well, they'll probably be wondering why the user I (genuinely) just registered - userid = 'mumble...', password = 'mutter...' - is logging in from so many damn IPs.
~whm
mutter, mumble (Score:1)
U:mubme P:mutter (Score:1, Offtopic)
Re:mubme, mutter ? (Score:1)
a billion tansistors and a 1kwatt (Score:1)
Re:a billion tansistors and a 1kwatt (Score:1)
From the article: By 2010, scientists predict, a single chip may hold more than a billion transistors, shedding 1,000 watts of thermal energy -- far more heat per square inch than a nuclear reactor.
Power consumption and heat dissipation per unit area are two different things.
Re:a billion tansistors and a 1kwatt (Score:2)
Re:a billion tansistors and a 1kwatt (Score:1)
Shows how smart they are; watts are a unit of power, not energy. DUH!
...You Asked For It... (Score:2)
Predicted Items
Deep Blue
Stand Away
Solitaer
Floor planing
Master Mind
Reaching Horizons
Freedom Call
DEEP RED
etc
Queen Of The Night
Painkiller
Today's Technology
Recent developments in AI
His literary influences
Angels Cry
Never Understand
Red
After rain
The Renju International Federation
Game of Go Ring
Gateway Inc
Dell Computer Corp
IBM
Carry On
The future of AI
Food Chain Fish
Deep Yellow
Violet
ZITO
Forest Green
Re:...You Asked For It... (Score:2)
It looks like the discography from Brazilian progressive rock band Angra. [artistdirect.com] At least nine of those items are the titles to some of their songs.
here is what google has to say... (Score:1)
It all comes back to energy.... (Score:4, Interesting)
How many people can hold the handle that turns the crank? Or in modern terms, how much juice can you reasonably throw at these beautiful monsters!?
So with this in mind, I don't think it's too off-topic to mention this article [sfgate.com] which talks about the gutting of funding for fuel cells. Or this student research paper site [umich.edu] which talks about the inherent economy of different sources of energy in various terms. (Warning! They are pro-nuclear, so YMMV!) Also, if you are interested in where this topic takes you you should stop off here [doe.gov] to follow up on whatever takes your fancy as far as energy production goes. They've got a veritable mountain of info. Check out their hydrogen economy [nrel.gov] stuff.
Whoever thought up the names of the two machines needs to get a grant or something! Green Destiny, mmmmmmm! Q, grooowwwl!
"registration required..mumble...mutter" (Score:3)
Re:"registration required..mumble...mutter" (Score:1)
Re:"registration required..mumble...mutter" (Score:2)
All the news that the NYTimes finds fit to printOT (Score:1)
But since it didn't have an article from the Gray Lady associated with it, the editors passed.
I read the Times on the train, I don't have to read it again. The WWW is a big place, and the whole point of
2002-06-10 16:52:06 Green Destiny: Los Alamos Beowulf Cluster, Low Upk (articles,news) (rejected)
http://sss.lanl.gov/vance.shtml
SupahComputers (Score:1)
Well, there has always been a need for different supercomputers. There are number-crunchers, vector machines, etc. You have to get the machine that suits your needs. That's all. Some machines need to work on HUGE datasets, but the calculation each cycle is not complex. Others do very complex calculations on small datasets (where a Beowulf works wonders). But you can't use a Linux cluster THAT efficiently if you have to move several hundred gigabytes of data around between the nodes.
On a side note: why is "blah blah the NY Times has registration, we know it" informative? I say offtopic!
These are the 2 different directions? NOT! (Score:2, Insightful)
The Q machine is a big cluster of Alpha servers with some kind of fast interconnect. The Green Destiny cluster seems to be a Transmeta blade cluster with some kind of commodity interconnect. Both are basically big collections of independent CPU's talking over some kind of fast network connection.
They are distributed-memory clusters: each machine has its own memory, and they interact through a fast network.
There are other architectures where you have all the processors sharing the same memory, and they communicate over the memory bus. Kind of like the difference between 16 PC's talking over gigabit ethernet, and a 16 processor Sun box.
At another level there is the whole vector vs. scalar architecture question. The Japanese have a 36-teraflop vector supercomputer that leaves our machines in the dust.
The article is misleading because the machines described are at different ends of a price spectrum, not at different ends of an architectural spectrum. You aren't looking at different approaches; you're just looking at different price points.
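For anyone who hasn't seen the difference in code, here is a minimal sketch of the distributed-memory style both of the article's machines use: each process owns its own memory, and data only moves via explicit messages. It uses mpi4py purely for illustration; real codes in this space are typically MPI in C or Fortran.

    # Minimal distributed-memory example with mpi4py: each rank has its own
    # memory; the only way rank 1 sees rank 0's data is an explicit message.
    # Run with e.g.: mpiexec -n 2 python this_file.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        local_data = [1, 2, 3]              # lives only in rank 0's memory
        comm.send(local_data, dest=1, tag=0)
    elif rank == 1:
        received = comm.recv(source=0, tag=0)
        print("rank 1 got", received)
    # On a shared-memory box the equivalent would just be two threads
    # reading the same array over the memory bus -- no messages at all.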
Just imagine... (Score:2, Funny)
Moores Law (Score:2, Informative)
This is a myth for the non-techie: it's transistor density that doubles every 18 months, not the number of transistors.
Re:Moores Law (Score:1, Flamebait)
Re:Moores Law (Score:1)
visit interesting places then blow them up. (Score:2, Insightful)
"Armed with all this computing power, Q's keepers plan to take on what for the Energy Department, anyway, is the Holy Grail of supercomputing: a full-scale, three-dimensional simulation of the physics involved in a nuclear explosion."
Come on, for Pete's sake, can't we do better stuff with this than simulate the physics in a nuclear explosion? Honestly, who cares about that? We all know it's bad, real bad. Move on to something near and dear to a lot of us out there: cancer, AIDS, heart disease, etc.
This is probably an unpopular view, but damn, enough destruction already.
Re:visit interesting places then blow them up. (Score:3, Insightful)
From what I've read, a real useful advance in computational biology would be to automate building and refining of protein structures from crystallography. It's just not as sexy, though.
Re:visit interesting places then blow them up. (Score:2)
> involve playing with huge computers.
Sure they do. It's called "rational drug design".
I spent a year of my life modelling networks of heart cells at an electrochemically detailed level. That line of work, if it was funded and pursued, could incrementally resolve the various causes of heart failure, and provide a wide array of mechanisms to detect, prevent, or treat the various forms of arrhythmia. It requires a lot of computing power to simulate the behaviour of networks of billions of cells at a fine scale.
Re:visit interesting places then blow them up. (Score:1)
As to cancer and HIV/AIDS, I don't see how you can put them even in the same ballpark. HIV is linked very, very strongly to easily avoided behavior -- the use of prostitutes, promiscuity, needle exchange et al. Most cancer victims (sun-related melanomas excepted) are far more innocent of their ailments.
Why simulate nuclear explosions? (Score:1)
The reason there are no fusion power plants is that we don't know how to handle the vast amounts of energy released in a fusion reaction. If it can be modeled and studied, maybe that energy source can become practical.
Re:Why simulate nuclear explosions? (Score:2)
The current interest in simulating nuclear explosions comes at the same time as Bush declaring that tactical nukes might be a good idea and exploring techniques for using them to destroy hardened underground bunkers (see this month's Scientific American). So I think that's the most likely justification for the current interest.
Re:Why simulate nuclear explosions? (Score:1)
Re:Why simulate nuclear explosions? (Score:2)
Re:Why no fusion power (Score:2)
Research fusion reactors exist, but they don't produce net power, rather they consume it. That is why they are still research.
This is pretty much unrelated to the problem of simulating fusion bombs, which uses a different fuel (for the final fusion stage, typically lithium-6 deuteride), the ignition of which involves a whole series of reactions between a large number of different materials, initiated by the detonation of a fission bomb boosted with deuterium or tritium.
Fusion power plants typically use plasmas, not solids, for their fuel, and are ignited by confinement and heating. The amount of energy released is pretty much self-regulating, since the plasma will tend to lose confinement and burn less if it gets too hot.
Power generation and weapons have very different design goals: power plants tend to be big, stay still, and produce large amounts of power for a long time, connected to a power transmission system, with human operators nearby. Weapons need to be moved quickly to a target (i.e. be light, compact, and robust), and generate a huge amount of power in a very brief time, with care taken to preserve the safety of the weapon's handlers, and not much done to preserve the safety of any people at the target.
(Of course, the two fields are not totally separate; however, my main point is that they typically involve very different computer simulations).
Re:visit interesting places then blow them up. (Score:2)
If there was no nuclear stockpile to maintain, Los Alamos probably would not be host to the "world's fastest supercomputers." Not exactly a fair trade...
Analysis of differences (Score:4, Insightful)
This means that the power efficiency difference is a mere factor of 5. The problem with supercomputing is of course scaling and interconnecting the CPUs... The author argues that Green Destiny is "not so picky" and "hums away contentedly next to piles of cardboard boxes and computer parts" while Q requires special buildings and monstrous cooling installations. Yeah, so what? It is a much smaller machine.
Of course it is easier to build a smaller machine than a large machine. I would say that, despite the fact that Green Destiny is 0.5% as fast as Q and is designed with power consumption in mind, it is just 5 times as efficient.
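The factor of 5 falls straight out of the article's numbers; a quick check in Python, using only figures quoted above:

    # Power efficiency from the article's figures: Q at 30 teraops drawing
    # 5 MW (3 MW machine + 2 MW cooling); Green Destiny at 160 gigaops
    # drawing 5 kW.
    q_ops_per_watt  = 30e12 / 5e6    # ~6.0e6 ops/W
    gd_ops_per_watt = 160e9 / 5e3    # ~3.2e7 ops/W

    print(gd_ops_per_watt / q_ops_per_watt)   # ~5.3, i.e. the "factor of 5"
    print(160e9 / 30e12)                      # ~0.0053, the "0.5% as fast"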
Can anyone tell me (or point to a resource) how CPU power consumption depends on transistor size and clock frequency? Will a chip of a given size operating at a given clock frequency require the same amount of power, regardless of the number of transistors in it?
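For what it's worth, the usual first-order answer is the CMOS dynamic-power relation P ~ alpha * C * V^2 * f: power scales with the switched capacitance (which grows with the number of actively switching transistors), the square of the supply voltage, and the clock frequency. So no, two chips of the same size at the same clock need not draw the same power. A sketch with made-up values:

    # First-order CMOS dynamic power: P ~ alpha * C * V**2 * f
    #   alpha = activity factor (fraction of transistors switching per cycle)
    #   C     = total switched capacitance, roughly proportional to the
    #           number of actively switching transistors
    #   V     = supply voltage, f = clock frequency
    # The numbers below are invented solely to show the scaling.

    def dynamic_power(alpha, capacitance_farads, volts, hertz):
        return alpha * capacitance_farads * volts**2 * hertz

    print(dynamic_power(0.1, 50e-9, 1.5, 1.0e9))   # ~11 W
    print(dynamic_power(0.1, 100e-9, 1.5, 1.0e9))  # double the switched capacitance -> ~22 W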
Re:Analysis of differences (Score:2)
Re:Analysis of differences (Score:2)
Modern computers generally include many special purpose cpu designs, e.g. the ones on the video cards, etc. just because different problem domains have different instruction mixes. (For that matter, the human brain also has specialized areas that process things like sound and vision differently.) When you design a computer you make certain assumptions about the characteristics of the problem domain that it will be working in. Things that I program this decade, e.g., rarely use floating point, but they do a lot with booleans. The ideal computer would be different from the i686, which is more designed for a standard mix of instructions. These super-computers will have been designed from the ground up to optimise the solution to some set of problems (while not being hopelessly bad for the more general run of problems).
So I suspect that the comment about the relative cpu timings for ops and flops still holds, though I could be wrong.
damn Moore's Law distortions (Score:1)
NYT:
Moore's Law holds that the number of transistors on a microprocessor -- the brain of a modern computer -- doubles about every 18 months,
Well, Gordon Moore observed the exponential increase of transistors on ICs in Electronics, Volume 38, Number 8, April 19, 1965. There were no microprocessors until about 1970! Also, he never mentioned 18 months, though it can be inferred.
-Kevin
more efficient chips (Score:2)
Re:more efficient chips (Score:2)
As for Motorola chips giving the best TCO, I'm not sure I buy it. Most clusters built on a tight budget these days use Athlons or P4s, often in a 2-way configuration. I'm sure they'd use Motorola chips if they provided better TCO and the software were available.
DSPs often rock for this kind of application (Score:2)
Teraops? (Score:1)
Since when did flops turn into ops? It's important to make a distinction between floating-point operations and integer operations, right? Seems pretty dumb to me. Or is it a cracker/hacker kind of thing...
Orp
Re:Teraops? (Score:3, Informative)
Not really, for two reasons: first, supercomputer CPUs are rigged for floating point, and they do it really fast anyway. Second, a super CPU is so fast compared to RAM that the time difference between an integer op and a floating point op is almost totally amortized into the RAM access time anyway. In other words, computing a float multiplication might be 1.5 times slower than an integer multiplication, but it's still 200 times faster than a RAM access.
Then you have to work out what exactly you mean by "operation" -- a single multiplication, or a single vector instruction (which might multiply 64 numbers in one shot). It quickly becomes difficult to judge performance based on some "flops" or "ops" number. To figure out performance it's better to just run the real application and see how fast it goes...
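A minimal sketch of that "just run the real application" approach, timing a dense matrix multiply and reporting sustained Gflop/s (the 2*n^3 flop count is the usual convention; whether you call those "ops" or "flops" is exactly the naming question above):

    # Time a dense matrix multiply and report a sustained rate.
    import time
    import numpy as np

    n = 1024
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * n**3            # n**3 multiplies + n**3 adds
    print(f"sustained: {flops / elapsed / 1e9:.2f} Gflop/s")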
Re:Teraops? (Score:2)
Efficiency of Programming? (Score:2)
Re:Efficiency of Programming? (Score:3, Informative)
Q machine interconnect (Score:3, Informative)
The Q machine utilizes dual-rail Quadrics cards, according to this [supercomputingonline.com]. Dual-rail refers to using two NI cards (each on a separate 64-bit/66MHz PCI bus, so they can get the most out of the host's I/O system).
I hadn't heard of Quadrics, so I looked them up. At the web site you find out that it's a switched network that gets 340 MBytes per second between applications, with latencies around 3-5 microseconds. Compare this to 100Mbps Ethernet, which gets 10 MBytes/s and latencies of 70+ microseconds, and you'll understand why the Q machine will run fine-grained parallel apps that the green machine won't be able to touch. [quadrics.com]
Looking a bit through the literature, I noticed that Quadrics uses IEEE 1596.3 for its link signaling (400 MBaud, 10 bit). While they don't say it anywhere, this IEEE standard is the well-known SCI standard (Scalable Coherent Interconnect), pretty popular in Europe, though the US has been dominated by Myrinet (which I coincidentally use at school).
Hope this gives some more detail about the arch..
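A rough model of why that matters for fine-grained apps, using only the figures quoted above plus an assumed 8 KB message size: transfer time is roughly latency + size/bandwidth.

    def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
        # Simple model: fixed per-message latency plus serialization time.
        return latency_s + size_bytes / bandwidth_bytes_per_s

    msg = 8 * 1024  # 8 KB message (assumed size, for illustration)

    quadrics = transfer_time(msg, 5e-6, 340e6)     # ~5 us latency, 340 MB/s
    ethernet = transfer_time(msg, 70e-6, 10e6)     # ~70 us latency, 10 MB/s

    print(f"Quadrics: {quadrics*1e6:.0f} us, Fast Ethernet: {ethernet*1e6:.0f} us")
    # -> roughly 29 us vs. 890 us; for even smaller messages the fixed
    #    latency dominates and the gap stays just as punishing.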
Re:Q machine interconnect (Score:2)
From reading the Beowulf mailing list archives, I remember the estimated cost is about $3500-$5000 per node, compared to about $2000 for Myrinet or Wulfkit.
Missing Important Alternatives! (Score:1)
I wish I had more details to give you. I'll add the lecturer's name when I get home from work. But yeah, this article definitely missed this aspect of research in information technology.
Good Direction Not Just for SuperComputing (Score:4, Interesting)
Lower power usage is a good direction for regular computing, too.
Many have noticed the increasing trend towards laptop computers as a primary computer for people concerned not just about portability, but also about space, electric power and noise issues in their abodes. A noisy tower and 60 lb space-hogging CRT is too uncool. Sleek LCD monitors, minimalist keyboards and no noisy cooling fans is where it's at.
And, many have noted too, that most CPU power is going to waste these days. Except for a few games and for the server environment, most CPUs spend their time waiting for someone to type in a character into MS Word or click the next link for a browser.
I think you'll see a shift to more energy efficient CPUs in a big way in a much broader market sector than supercomputing. Namely, desktop client access devices will go this route, too.
Register "Fry an egg" reference (Score:2)
What a battery! (Score:1)