The Amazing Shrinking Supercomputer
mE123 writes "It would seem that IBM is trying to change what we all think of as supercomputers. Their new Blue Gene family of supercomputers is meant to be 6 times faster, consume 1/15 the power, and be 1/10 the size of current models. The prototype is already number 73 (with 2 teraflops) on the list of the most powerful supercomputers, and it's only "roughly the size of a 30-inch television". They are hoping to scale it up to 360 teraflops using only 64 racks." We covered this a bit earlier, but without this level of detail.
Priorities.. (Score:5, Interesting)
Re:Priorities.. (Score:4, Insightful)
That aside, I would happily take a computer the size of a 30" TV if it was SUPER!
Re:Priorities.. (Score:5, Insightful)
Very few businesses or institutions can afford, or need, an Earth Simulator. Big, power-hungry supercomputers need specialised buildings with sufficient power supply and heat-dissipation capabilities. By creating a small, power-efficient supercomputer which can simply be plugged in in the server room, they open up an entirely new market.
Re:Priorities.. (Score:5, Insightful)
Modular installation = better able to match requirements without having to build the entire system from scratch = a more cost-effective solution for some (most?) customers.
I think the "Imagine a Beowulf cluster of these" joke may actually be pretty close to the point!
=Smidge=
What does NEED mean? (Score:5, Insightful)
If supercomputers were ubiquitous, more uses would be found. So I don't see how "need" comes into the picture. Now who can afford one? That is a good question. If they were affordable you'd see needs popping up all over.
Re:Priorities.. (Score:5, Insightful)
You need small power-efficient supercomputers so that you don't need a dedicated 100MW coal-fired power plant next door for each 10 teraflop building.
Imagine the cooling system necessary for a building which dissipates the energy normally used by a small city!
This is why Blue Gene is cool; they realize that at the high end, power is going to become the limiting factor, and they designed their architecture accordingly.
Bobby
Re:Priorities.. (Score:2, Insightful)
Re:Priorities.. (Score:5, Insightful)
Re:Priorities.. (Score:4, Informative)
The node-to-node density, though, is very high. The maximum cable length is 8m.
Small Size Critical As Speed Increases. (Score:5, Informative)
Why do we need to have small, power-efficient supercomputers? Isn't the main goal of the supercomputer to be fast as hell? Granted, if this can be achieved while simultaneously minimizing power and size then by all means go for it. However, as stated by my parent, what sacrifices are being made?
The increase in speed is related to the reduction in size.
For a moment, let's pretend that electricity within a wire travels at the speed of light.
Now, let's pretend that we wish to carry pulses of electricity from one end of the computer to the other at a very high speed.
At some point, the distance the signal has to travel will become significant to the speed of the computer.
This is already happening in PCs. If you take a close look at the motherboard in your computer, chances are you'll see weird places where the traces just zig-zag back and forth (notice the angles on them, that's not by accident either, but I'm not going to try to explain a fourth-year university course in microwave and RF design here). These zig-zags add length to the traces so that they have the same length as other traces within the same bus, and all the signals on that bus arrive at the same time. Think of them as being "equal length headers", if you're into the throb of a big-block V8.
Length of interconnecting wires is non-trivial at this point. Stray capacitance and inductance caused by any conductor are non-trivial at this point. As a result, a terrific limiting factor to the speed of a computer is now its size.
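To put rough numbers on that, here's a quick back-of-the-envelope sketch (plain Python; the 8 m figure is the maximum cable length mentioned in another comment above, and the two-thirds-of-c propagation speed and 700 MHz clock are just illustrative assumptions):

```python
# Back-of-the-envelope: how far can a signal get in one clock cycle?
# Assumptions (illustrative only): signals propagate at ~2/3 c in copper,
# the clock is 700 MHz, and the longest interconnect is 8 m.

C = 3.0e8            # speed of light in vacuum, m/s
v = (2.0 / 3.0) * C  # rough propagation speed in a copper trace/cable
clock_hz = 700e6     # assumed clock rate
cable_m = 8.0        # maximum cable length quoted above

period_s = 1.0 / clock_hz
reach_per_cycle_m = v * period_s
cable_delay_cycles = cable_m / reach_per_cycle_m

print(f"One clock period: {period_s * 1e9:.2f} ns")
print(f"Signal travels ~{reach_per_cycle_m:.2f} m per cycle")
print(f"An 8 m cable costs ~{cable_delay_cycles:.1f} clock cycles one way")
```

Call it a couple dozen clock cycles just to cross the longest cable one way, before any actual work gets done - which is exactly why shrinking the machine matters.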
Power consumption is also related. Modern ICs are made of millions of MOSFET transistors which behave as switches. These switches are not perfect: during the transition between a logic high and a logic low, the transistors spend time in the linear state where they are resistive. As a result, they waste energy as heat.
Stray capacitance and inductance - even within the junctions of the transistors themselves - slow their ability to switch instantaneously. As a result, they must be made as small as possible to reduce capacitance (C) and inductance (L).
This also explains why newer generations of a processor can run faster than their predecessors: smaller and smaller features on the IC mean less stray C and L, which means that the transistors can switch states faster, which means that they spend less time in the linear state and therefore heat up less. This means less energy wasted as heat.
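The first-order model behind all of that is dynamic CMOS power, roughly P ~ activity * C * V^2 * f. A sketch with made-up numbers, purely to show the scaling:

```python
# First-order dynamic CMOS power: P ~ activity * C * V^2 * f
# The numbers below are invented purely to show the scaling, not real chip data.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

old = dynamic_power(activity=0.1, capacitance_f=50e-9, voltage_v=1.8, freq_hz=1.0e9)
# After a process shrink: less switched capacitance and a lower supply voltage,
# so the clock can rise while power stays flat or even falls.
new = dynamic_power(activity=0.1, capacitance_f=30e-9, voltage_v=1.2, freq_hz=1.5e9)

print(f"old process: {old:.1f} W   new process: {new:.1f} W")
```

Same activity, higher clock, yet less heat - purely from the smaller C and lower V.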
Not sacrifices, but pure marketplace tradeoff. (Score:3, Insightful)
Answer: To do a very sophisticated simulation that would be too difficult or costly to conduct in real life.
But if the supercomputer is so expensive to purchase and maintain, it might be easier and cheaper to use CAD and rapid prototyping to make a few doo-dads and knock them into each other for real, as an example.
So if the supercomputers can't scale with the rest of computing or manufacturing, then no one will buy them (no one who doesn't want to get fired
Re:Priorities.. (Score:3, Insightful)
It brings them into reach of small engineering firms and university engineering, science, and math departments.
Imagine if supercomputing goes the way of the PC: affordable and ubiquitous to those who want them. It is arguable that today's gigaflops CPUs are already supercomputers, but I guess people are always striving for more.
Re:Priorities.. (Score:2)
Re:Priorities.. (Score:3, Interesting)
The other difference, and a potential problem when compared to a cluster, is that in a cluster, if one machine fails, there are usually measures to just knock that one machine out of the network and carry on
Re:Priorities.. (Score:5, Interesting)
Do you need to find the cure for cancer via simulations faster or do you need to send a machine up on a 747?
Different needs, different solutions.
Re:Priorities.. (Score:3, Insightful)
If it's 1/10 the size and 1/15 the power and it's still faster, then we can stick 15 of them in a room, get the same power consumption, and have the larger, "much faster" computer that you're looking for. This seems like a win-win direction for IBM to go in.
~Berj
Re:Priorities.. (Score:2)
You know, being 6 times faster at 1/10 the size is actually being 60 times faster per unit of floor space, IMO. It's definitely win-win.
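For what it's worth, here's that arithmetic spelled out, taking the 6x / 1/10 / 1/15 figures from the summary at face value:

```python
# Rough check of the summary's claims: 6x faster, 1/10 the size, 1/15 the power.
speedup, size_ratio, power_ratio = 6.0, 1.0 / 10.0, 1.0 / 15.0

perf_per_volume = speedup / size_ratio   # throughput per unit of floor space
perf_per_watt   = speedup / power_ratio  # throughput per unit of power

print(f"~{perf_per_volume:.0f}x the performance per unit of floor space")
print(f"~{perf_per_watt:.0f}x the performance per watt")
```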
Re:Priorities.. (Score:5, Informative)
In theory, you could just keep adding more and more nodes to an existing system, and as long as your interconnects were good enough, you could scale.
But in practice energy consumption (and getting rid of the waste heat afterwards) will hit you before you can get much further than we are today. The Big Mac G5 cluster in VA, for example, required custom cooling systems because conventional aircon units simply couldn't handle the load.
As a result, IBM's work is *vital* for making faster supercomputers -- and the improvements they're claiming are very impressive indeed.
Re:Priorities.. (Score:2)
One of today's problems is that we are very inefficient in our chips. Think about Intel and AMD using 80-100 watts of power, while Transmeta's new 1.3 GHz chip uses only 7 watts. It is possible to build a parallel system using these and have the system be cheaper to build and run than with Intel/AMD.
While this is going to be a real killer in terms of speed, it will hopefully make it more profitable for IBM as more companies will be able to afford these.
Re:Priorities.. (Score:2)
There are complications, but that's the general procedure. If you made a Pentium the size of an IBM 360, it would probably be slower than a 360. (I'm assuming that you just scaled everything up... switches, path lengths, etc.)
The real trick is when they get something small enough and organized enough that they can mass produce them. Then the price starts dropping too. This pr
The most important question (Score:2)
Somebody needs to ask, and it may as well be me. I leave the obligatory Wolfpack question for others (I'm not greedy, after all).
Supercomputers in a tower case... (Score:2, Redundant)
Re:Supercomputers in a tower case... (Score:2, Insightful)
Re:Supercomputers in a tower case... (Score:2, Funny)
Re:Supercomputers in a tower case... (Score:2, Funny)
Re:Priorities.. (Score:3, Informative)
They once made a machine out of FPGAs. It worked by evolution: it would rearrange different FPGAs and work out which gave the correct answer the quickest, and learn from there. Basically, it was pretty slow the first time it tried something. But if you let it learn for a while, you got supercomputer performance out of a tower-sized box (on the specific set of tasks it had learned, anyway).
It's good for plenty of fixed-task things: medical imaging, software-defined DSP, scientific computing, that
Finally! (Score:5, Funny)
Clif
Blogzine.net [blogzine.net]
Fortress of Insanity [homeunix.org]
Scale and costs (Score:5, Interesting)
Something that fits into the space of a 30" TV set (how about dimensions, guys?) is presumably about half to 1/3 of a standard rack in a co-lo. 2 teraflops of processing power ought to be able to comfortably shift the bottleneck to the bandwidth, even for database-oriented sites
I think people's cost expectations are going to be significantly impacted by the size of this - if it's small, it must be cheap, right? (Wrong, but try telling them...)
Fantastic achievement, btw, kudos to the man in blue
Simon
Re:Scale and costs (Score:5, Informative)
Artists' rendition of a deployed cluster at bottom (Score:2)
Re:Scale and costs (Score:2)
Re:Scale and costs (Score:2)
A gigantic RAID 1+0 however...
Re:Scale and costs (Score:2)
A more feasible route might be to run round-robin DNS to the various nodes in some fashion, to distribute the load. Instant high-availability for even heavily loaded sites.
Yes, I'd expect a damn sight more RAM per node than 64M. Why on earth put only 64M on a node - t
Re:Scale and costs (Score:4, Interesting)
More memory would be a waste most of the time.
Most of the challenge in supercomputing is now in figuring out how to chop up the workload and efficiently deliver it to the processors (and get back the results). It is a very different process from the days of the Crays (1-3).
Re:Scale and costs (Score:2)
With this paradigm, I don't see why you'd want to restrict the memory per node. The singl
Re:Scale and costs (Score:2, Insightful)
I agree with Prof. Frink However... (Score:5, Funny)
Re:I agree with Prof. Frink However... (Score:5, Interesting)
Kings in Europe are no longer Rich... at least not compared to US tycoons.
Re:I agree with Prof. Frink However... (Score:2)
Supercomputing for small business (Score:3, Interesting)
Re:Supercomputing for small business (Score:5, Insightful)
You can choose either price or speed, but not both. So do you want something for 30-60k? Or do you want something in the top 100?
Your small business should take some economics
Re:Supercomputing for small business (Score:2)
Sorry for the confusion, but I was implying quantity demanded (the quantity he sells).
Here is how long you can expect to wait (Score:3, Informative)
I recently did a search of top500.org [top500.org] which has specs back to June 1993 [top500.org] and up to June 2003 [top500.org]
BTW, WHO THE HELL BROKE top500.org [top500.org]!?!? This site used to be easy to use and informative; now it is a banner-ad hell that obscures the info you used to be able to get to easily, with many broken links and apologies for works in progress.
Anyway, I digress. The point is that in 1993 the fastest computer was the TMC at Los Alamos with GigaFl
impressive, but is it as impressive as it sounds? (Score:4, Interesting)
If you read the press release, they claim that previous 2 teraflop machines fill up entire rooms, with more than a dozen racks. I'm not so sure this is the case: for instance, Apple claims 798 gigaflops to a rack with the Xserve [apple.com]; by my reckoning that works out to needing 2.5 racks to get 2 teraflops. And that's just with dual 1.3 GHz G4 CPUs; I'd imagine there is an upcoming Xserve rev featuring dual 2.0 GHz G5's.
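Spelling out that reckoning in a quick sketch (the 798 gigaflops-per-rack figure is the one Apple claims, as quoted above - I haven't verified it):

```python
# How many Xserve racks to match 2 TFLOPS, using the figure quoted above.
import math

xserve_rack_gflops = 798.0   # claimed per-rack figure from the parent post
target_gflops = 2000.0       # 2 teraflops

racks_exact = target_gflops / xserve_rack_gflops
print(f"{racks_exact:.1f} racks worth of Xserves "
      f"(i.e. {math.ceil(racks_exact)} physical racks)")
```

So call it three physical racks of Xserves for the same 2 teraflops, versus the half-rack prototype described in this thread.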
Don't get me wrong, it's still an impressive achievement (especially if it uses as much less power as claimed.)
Sigh. Pravda nyet Isvestia, Isvestia nyet Pravda (Score:5, Funny)
Yes, one day supercomputers will fit into your wristwatch! What's more, they already do! If you use an ancient measure from, say, 50 years ago.
It's very disappointing to see technology always reduced to whizz-bang figures that are in fact meaningless. What about the impact on our society? What about the capability for good and for bad? What do "good" and "bad" mean, anyhow? How do I know I even exist? What does "I" even mean?
Now, that kind of stuff is worth discussing.
OK, go ahead and mod me as a troll now, if you can't think of an intelligent answer.
Re:Sigh. Pravda nyet Isvestia, Isvestia nyet Pravd (Score:2)
What this extra speed is used for is important. But it is a separate issue.
Re:Sigh. Pravda nyet Isvestia, Isvestia nyet Pravd (Score:2)
"I" then, is a word refering to the reflection of your "self" as seen through the lense of the world you've been conditioned to accept.
"I" must exist, however, because with
Re:Sigh. Pravda nyet Isvestia, Isvestia nyet Pravd (Score:2)
I will provide the answers to the important and serious questions posed above; for I believe in providing light where there is darkness...
What about the impact on our society? Nothing. It's just a computer dude!
What ab
Re:Sigh. Pravda nyet Isvestia, Isvestia nyet Pravd (Score:2)
italics?
Pravda (Score:2)
Good going until this point.
If Darwin was right, we're just replicators, the concept of "I" is a trick of perspective created by our minds in order to improve our performance, and intelligence is limited by an awareness horizon: once we cross it, we realize it's all a big joke and we self-destruct. Philosophically speaking, of course.
Sigh. At least we'll be able to buy brain-sized supercomputers to replace our auto-annihilating intelligence organs.
Re:Pravda (Score:2)
Re:Pravda (Score:2)
Re:Pravda (Score:2)
Your DNA does not want anything. If you impregnate every hot chick you see, there's a better chance that rough copies of your DNA will be around in the future. Your DNA might have something to do with your desire to impregnate hot chicks, and that effect might get passed on to your descendants, whereas the "don't impregnate hot chicks" gene is less likely to get passed on, and thus... well, you know, natural selection, evolution, etc.
But your DNA does not want anything, and evolution works anyway. It can
Self-destruction, a field-test (Score:2)
The "self-destruction" is implicit in the surrender of the "I". By definition.
However, that is neither good nor bad, since these terms can't even be defined without recourse to the "self" we've just destroyed.
The temporary collection of genes that has registered under the Slashdot alias of HeironymousCoward, and which we can abbreviate as "I" for the purposes of discussion, thinks that what is left after des
mini-super vs. true-super (Score:2)
We went through this process in the late 1980s with the Cray clones, crayettes, etc. You got like a fifth of the power of a Cr
SHOCK! (Score:5, Funny)
In other news, the price of petrol increases.
Size not accurate (Score:2)
Re:Size not accurate (Score:2)
What about distributed apps? (Score:4, Insightful)
That being the case, why aren't distributed apps considered as part of the supercomputer list? I mean, SETI@Home has got to be far and away #1 in terms of computing power. Granted, it's not in one integrated piece of hardware, and Berkeley doesn't own all the hardware, but I still think these things ought to be considered, at least to make it more realistic about who actually has the most computing power.
Just my little rant.
Distributed apps aren't the problem (Score:4, Informative)
Most of the tasks you pick a supercomputer for aren't things you can cut up into a thousand chunks and let every computer finish its chunk of the problem independently. In particular, the benchmarks (LINPACK) that determine who goes where on that supercomputer list generally measure a computer's performance at big linear algebra problems (which are what takes up most of the compute time for huge classes of real problems), and for those problems every node needs to share results with many other nodes after essentially every iteration: this means you need high bandwidth and very low latency connecting the nodes.
Now, the supercomputer benchmarks may make things worse than they have to be: according to this [top500.org] they're measuring performance on dense matrices (where every node needs to talk to every other node), whereas many real world problems can be discretized into very sparse matrices (where each node only has to talk directly to a few of the others) instead - still, even in the sparse situation you want your computers to be separated by microseconds across your high speed interconnect rather than milliseconds across the low bandwidth internet.
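Here's a toy model of why that latency gap matters so much for an iterative solve - every number in it is invented purely for illustration:

```python
# Toy model: per-iteration time = local compute + neighbour exchanges.
# All numbers are invented; the point is only the latency sensitivity.

def time_per_iteration(compute_s, msgs_per_iter, latency_s, bytes_per_msg, bandwidth_bps):
    comm = msgs_per_iter * (latency_s + bytes_per_msg * 8 / bandwidth_bps)
    return compute_s + comm

compute_s = 5e-3          # 5 ms of local number crunching per iteration
msgs = 6                  # exchanges with a few neighbours (the sparse case)
payload = 100_000         # 100 kB per message
bw = 1e9                  # 1 Gbit/s link either way

fast = time_per_iteration(compute_s, msgs, latency_s=5e-6,  bytes_per_msg=payload, bandwidth_bps=bw)
slow = time_per_iteration(compute_s, msgs, latency_s=50e-3, bytes_per_msg=payload, bandwidth_bps=bw)

print(f"microsecond interconnect: {fast*1e3:.1f} ms/iteration")
print(f"internet-style latency:   {slow*1e3:.1f} ms/iteration")
```

Even with only a handful of neighbours per node, internet-class latency makes each iteration roughly 30x slower in this toy setup - and that's the friendly, sparse case.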
Bad joke... (Score:3, Funny)
Would sales tax on these things be called a "Blue Gene Levy"? Hahahaha. Horrible, I know. ;)
Re:Bad joke... (Score:2)
Many years ago Cray... (Score:5, Insightful)
Re:Many years ago Cray... (Score:3)
A Cray is basically "overclocked" (or, more accurately, clocked to the theoretical max) when we ship it.
Decreasing size is certainly a very good way to increase clock rate. Making a machine faster, however, usually involves more than just the clock rate, including:
- changing chip fab technology
this is strange... (Score:2, Redundant)
I submitted this story 10 days ago, November 14, 2003, the day IBM published their press release online, almost verbatim as I quoted the same material, and it was rejected!!
Not only does this strike me as old news, but its publication now completely baffles me.
Here's an idea: (Score:2, Insightful)
Later on, after about 20 more people submitted it, they gave in and posted it directly. They generally credit the person who causes them to post it, rather than a group.
Re:this is strange... (Score:2)
Well, that's it for me; I won't be wasting my time again submitting stories to Slashdot.
gee (Score:2, Redundant)
Small = Dense = More power (Score:5, Interesting)
We're packing 1024 compute nodes (each node having two CPU cores) into a rack. The nodes are small and based on the PowerPC 440, with beefed up floating point. It has to be air cooled - water is a PITA.
The finished machine will still be quite large - 64 racks with miles of cables. And that doesn't count disk drives. There isn't a single disk drive on the thing - the customer provides the filesystem, which will also be another beefy set of machines. It requires a new building.
The machine featured in the article is just half a rack. It is still respectable, coming in at #73 on top500.org. It might be quite useful for business and small-scale scientific work in its current form. (This is far more than my alma mater had access to.)
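Putting the numbers in this thread together (64 racks, 1024 nodes per rack, and the 360 teraflop target from the summary), the implied per-node peak is just the quotient - not an official spec:

```python
# Implied per-node peak from the figures in this thread (not official specs).
racks = 64
nodes_per_rack = 1024
target_tflops = 360.0

total_nodes = racks * nodes_per_rack
gflops_per_node = target_tflops * 1000.0 / total_nodes

print(f"{total_nodes} compute nodes in the full machine")
print(f"~{gflops_per_node:.1f} GFLOPS peak per node to hit {target_tflops:.0f} TFLOPS")
```

Which is consistent with lots of modest, low-power nodes rather than a few monster CPUs.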
Re:Small = Dense = More power (Score:2)
The original articles from AP (news.yahoo.com and many other places) mentioned something about the walls/sides being tilted 17 degrees to speed up airflow. True or just a hoax?
Can you comment on that or is this covered by your NDA?
Thanks in advance.
Thats one hell of a P0rn server (Score:3, Funny)
hmm... how many Counterstrike servers will it run at the same time...
(Note: The above is meant to be foolish and meaningless. Any other interpretation is pure coincidence. The names have been changed to protect the innocent.)
Blue Gene (Score:2, Informative)
Difficult to program (Score:3, Informative)
It is much more difficult to use them for most applications most of us can think of. For example, VLSI CAD software (simulation/analysis/synthesis) is very compute intensive. However, these systems usually do not even take advantage of the multiple CPUs in a typical general-purpose SMP system. You have to manually partition designs and sometimes lose the advantages of global optimization.
So don't run and order your new Blue Gene yet :)
The wait is over (Score:4, Funny)
The more things change... (Score:4, Insightful)
Re:The more things change... (Score:3, Interesting)
So you won't see a supercomputer under your desk, simply because as long as there is space it's possible to build a larger computer that does things your computer can't do.
BlueGene/L presentation (Score:3, Informative)
Old OLD news.... (Score:2, Informative)
Big screen #1 [ugwarehouse.org]
40" HDTV [ugwarehouse.org] and a A size perspective [ugwarehouse.org]
We have DEFINITELY been down this road before, folks. I don't see why it's so hard to do this, unless you're using COTS components. Hence the point of "engineering" - not cramming a bunch of stuff in boxes/packages into bigger boxes and packages.
Wait till they "upsize" it. (Score:3, Informative)
IBM knows what it is doing.
make it smaller.... (Score:2, Funny)
Can't compete. (Score:3, Interesting)
Efficiency is only half the problem (Score:5, Insightful)
While progress is being made in making supercomputers more efficient in terms of power usage and space, the widespread adoption of supercomputers is still really hampered by functionality. The majority of supercomputers are used for modeling, simulations, or code breaking. This limits their usage to academic and government institutions. These breakthroughs only help those kinds of institutions afford a supercomputer. I would think that most businesses have little use for that kind of raw computing power. Their computing bottlenecks are more related to transactions per unit time as opposed to calculations per unit time.
Re:Efficiency is only half the problem (Score:2)
Businesses don't have computing bottlenecks; they only have I/O and disk bottlenecks. Even the 700k tpm machines with 70+ RAID channels don't have 100% CPU load...
Traditional Joke: Imagine a Beowulf cluster of... (Score:2)
I do think that this is the wrong approach for the long term, but for now...
What I see for the long term is some chip maker implementing a complete Beowulf node on a chip, and using a Beowulf bus for the connection lines... though you might design it so that it could link directly to "nearest neighbors" to the four sides (cor
I'll take two... (Score:2)
'Super' computers vs 'Intelligent' computers (Score:2)
For example say you have 2 graphics packages install
Re:new supercomputing challenge (Score:2)
Re:new supercomputing challenge (Score:2)
Try pissing while holding onto a gas station key with its tire, and your cellphone-sized supercomputer with a boulder attached to it.
Re:PERSONAL SUPERCOMPUTER (Score:2, Informative)
Cray made the Cray EL series from '94-'97; they were "deskside" computers. See here for more info [ucar.edu].
HANDHELD SUPERCOMPUTER (Score:2)
Re:Sprinkler (Score:5, Interesting)
We have a "dry" system, where you have to break 2 heads in separate zones for the system to flood, the room has to be almost 200F for water to actually flow.
Since the pipes are dry normally, it doesn't hurt at all if you accidentally wipe out a sprinkler head with a relay rack, or rip a pipe down in the ceiling. The rest of the building will be deeply engulfed in flame, and the computers will have already melted from ambient heat before the water system in that room kicks in.
In fact, my guess has always been that the reality, even with halon, is that halon/foam doesn't do you any good when the rest of the building falls down on top of your spiffy computer room.
The problem is, what happens if there's a LOCALIZED fire in that room? What if the PDU explodes into a million sparking pieces? What if the UPS explodes? Bad things could happen. Of course, in either of those cases, the "bad things" would probably include sending a fairly deadly spike into the machines, frying them to the point that we don't care if the water is flowing or not.
Top 500 Supercomputers (Score:2, Informative)
Re:where is this list anyway? (Score:4, Informative)
Re:We hear from them only during development ... (Score:2)
If you think they aren't making a revolution in science, try actually researching what these machines have been used for.
As for upgrade paths, look for instance to the recent announcements regarding SGI's single-system-image supercomputer sold to NASA, which started out at 128 CPUs and was quickly upgraded
Re:We hear from them only during development ... (Score:2)
I hear you. But the same thing happens with all research in the U.S. Take the ITER fusion project. It's going to cost a bomb, and for what? So that the U.S. can have a "cheap" unlimited source of power which no third-world country can afford because of its technological complexity. Yet there is an unlimited source of energy right outside my window. It's called the "sun". Sad.
Re:We hear from them only during development ... (Score:2)
Re:We hear from them only during development ... (Score:2)
Re:We hear from them only during development ... (Score:2)
Re:We hear from them only during development ... (Score:3, Insightful)
What's this "we", white man? Are you a supercomputer developer? Because, I mean, if Seymour Cray rose from the grave, got a slashdot account, and wrote your post, I would believe that you have a point. But he didn't, so you don't.
"Supercomputers", a set which today includes the supercluster computers along with traditional supers (like Cray's
Re:We hear from them only during development ... (Score:3, Interesting)
It's the research that trickles down that can at some point provide solutions FOR EVERYONE (well, not everyone at the same time). This is so obvious it shouldn't even have to be explained.
I mean, where do you think technology comes from? Come on, I have a Palm that has 75 times the clock spe