Supercomputers To Move To Specialization?
lucasw writes "The Japan Earth Simulator outperformed a computer at Los Alamos (previously the world's fastest) by a factor of three while using fewer, more specialized processors and advanced interconnect technology. This spawned multiple government reports that many suspected would call for more U.S. funding for custom supercomputer architectures and less emphasis on clustering commodity hardware. One report, released yesterday, suggests a balanced approach."
Cost comparison? (Score:5, Interesting)
Re:Cost comparison? (Score:5, Insightful)
Re:Cost comparison? (Score:5, Informative)
As an example, the first IBM SP "supercomputers" were essentially just common Power workstations bolted into racks, but connected with a custom-made SP switch.
Nevertheless, the Earth Simulator has shown what can be done by designing the entire system from the ground up with the application in mind.
We'll have to see how ASCI Purple performs...
Re:Cost comparison? (Score:3, Informative)
Currently the interconnects are the biggest setback.
Lightpath, which is designed to be a "low"-cost supercomputer, is based upon a bio-med computer out of NY (p
Mmm, Didn't Turing do that 55 years ago? (Score:2)
Good question (Score:2, Funny)
I am also wondering: which should I get? I mean, with Doom III on its way, to get decent frame rates should I go with a specialized supercomputer, or a Linux Beowulf cluster?
Re:Cost comparison? (Score:2)
I am guessing that cost is not the most important factor when it comes to supercomputers anyway. If you are the CIA, NOAA, or biotech, keeping costs down is nice, but performance is more of an issue. There was a similar article a few days back about the new Crays, and how Cray's sales are up significantly. I can see
Re:Cost comparison? (Score:2)
Re:Cost comparison? (Score:1)
It would be just like comparing a luxury car to a Kia: the parts are similar, and they perform similar functions, but
Re:Cost comparison? (Score:1)
The people at the large supercomputing centers, those who fund them, and the companies making supercomputing equipment, al
Re:Cost comparison? (Score:1)
10TF Opteron cluster: $10M
10TF pSeries cluster: $35M
10TF Cray X1: $70M
For this to be a relevant comparison of cost vs. TF: what is the point-to-point interconnect bandwidth, what application are you running, and what is your aggregate bandwidth?
There are more dimensions to the problem of "best super computer" than TFLOPS vs $.
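To make the dollars-per-teraflop arithmetic explicit (the prices are the parent post's figures, taken at face value, not verified quotes), a quick sketch:

```python
# Cost per teraflop for the three 10 TF systems quoted above
# (prices are the parent post's numbers).
systems = {
    "Opteron cluster": 10_000_000,
    "pSeries cluster": 35_000_000,
    "Cray X1": 70_000_000,
}
TERAFLOPS = 10

cost_per_tf = {name: price / TERAFLOPS for name, price in systems.items()}
for name, cpt in cost_per_tf.items():
    print(f"{name}: ${cpt / 1e6:.1f}M per TF")
```

Of course, as the parent says, $/TF alone tells you nothing about interconnect or aggregate bandwidth.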
performance vs cost (Score:4, Interesting)
*flops not necessarily important... (Score:4, Funny)
In that case (Score:2)
Then I'd cluster a planetload of Apple II's running Integer BASIC.
Re:*flops not necessarily important... (Score:2)
Re:performance vs cost (Score:2)
Dollars per teraflop you mean?
Once we start talking about teraflops per dollar, things get really interesting!
Benchmarking (Score:3, Insightful)
Oh! No! End of the World! (Score:4, Funny)
Let's hope this isn't tied into Nukes somehow. Wait a sec, a massive virus has already spread disabling millions of computers!
RUN HIDE! THE END IS UPON US!!!!!!!
Re:Oh! No! End of the World! (Score:5, Funny)
There's a far more important thing to worry about - could this be the end of "Imagine a Beowulf Cluster ..." jokes? After all, the phrase "Imagine a custom-built supercomputer utilising similar technology (albeit more specialised) to that found in one of those!" doesn't exactly roll off the tongue, does it?
Re:Oh! No! End of the World! (Score:2)
Re:Oh! No! End of the World! (Score:2)
Let's hope this isn't tied into Nukes somehow. Wait a sec, a massive virus has already spread disabling millions of computers!
Yeah, since we all know that any intelligent, distributed computer system's first goal is to blow itself up. Think about it: if SkyNet was running as a massively parallel program on all the PCs in the world, then by blowing up the cities, wasn't SkyNet blowing itself up? This plot hole is so big you can drive a Toyota Tu
Someone who's knowledge please tell me (Score:4, Interesting)
What is the difference between a processor designed to simulate earthquakes (et al.) and an ordinary, off-the-shelf processor? I mean, so they optimized floating-point operations. Is that it?
Re:Someone who's knowledge please tell me (Score:4, Informative)
trigonometry? (Score:4, Informative)
Do HP's Saturn or other such special-purpose processors have hard-coded higher-level functions?
Re:trigonometry? (Score:5, Funny)
Indeed, functions Cost_an_arm_and_a_leg() and Fork_over_much_dough() are hard-coded, and always return a value of "1".
apparently the saturn is too expensive even for HP (Score:2)
Re:Someone who's knowledge please tell me (Score:1)
Re:Someone who's knowledge please tell me (Score:4, Interesting)
Can't remember the link, but somebody made a board with a few FPGA chips (I think) that cracked a 56-bit DES key in a few days or less, and distributed.net had how many computers working on it for how many years?
It's all about designing the chip for the application. The ones they are referring to would probably be designed to do mass computation of heavy physics, and only be able to run custom nuke-simulation software.
The thing I am interested in, as an ex-Computer Systems Engineering major, is whether they are interested in designing and fabbing processors from the ground up, or in using an assload of FPGAs from a company like Altera and programming them...
Distributed Net (Score:2)
I got bored of it all and switched to the Intel Cancer project. More useful. Too bad it doesn't run on linux.
Yes, I'm trying to Karma-Whore (Score:2)
Re:Someone who's knowledge please tell me (Score:2, Insightful)
I think you're thinking of the EFF's DES cracking machine. It used a custom gate array chip - it took advantage of the cheapness of an ASIC, but not the extra efficiency (they couldn't afford to have the first round of chips not work properly - a large proportion of the chips didn't work properl
Re:Someone who's knowledge please tell me (Score:2)
Imagine having the fastest processor on earth, and then take that chip and use it to do the calculation of x1++ (that's x1 = x1 + 1 for you non-C'ers) and loop it a few trillion times.
Well, regardless of the chip performing the calculation, I'd prefer doing
x1 += a_few_trillion;
instead of looping.
But yes, I get your point. "Supercomputer CPUs", like those in the Earth Simulator, are optimised for SIMD operations. Given a vectorizing FORTRAN compiler and a BLAS library handwritten in assembler to ta
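A toy illustration of the grandparent's point, scaled way down from "a few trillion" so it actually finishes (timings will vary by machine):

```python
import time

N = 5_000_000  # stand-in for "a few trillion", scaled down to run quickly

# The naive way: increment one at a time, as in the grandparent's x1++ loop.
x1 = 0
t0 = time.perf_counter()
for _ in range(N):
    x1 += 1
loop_seconds = time.perf_counter() - t0

# The sane way: a single addition, as the parent suggests.
x2 = 0
t0 = time.perf_counter()
x2 += N
add_seconds = time.perf_counter() - t0

assert x1 == x2  # same answer either way
print(f"loop: {loop_seconds:.3f}s  single add: {add_seconds:.6f}s")
```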
Interesting.. (Score:1)
Makes you wonder if Japan has already developed a nice powerful 128-bit supercomputer to dish out to crush any competition.
Re:Interesting.. (Score:3, Insightful)
Stop being silly. The cooling requirements of an Athlon-based massively powerful supercomputer would eat up the savings from using standard parts.
Seriously, though - I would guess, actually, that if one were to build a supercomputer from a "desktop" processor, the PPC970 (aka G5) chips would be a good choice. They have a solid vector unit, are RISCier, have a wider bus, and a better pi
Re:Interesting.. (Score:2)
Re:Interesting.. (Score:2)
Athlons dissipate less heat and tolerate higher operating temperatures than P4s.
Besides that, it doesn't take much to dissipate "extra" heat... Once you've got the fans that suck the heat outdoors, it doesn't really matter how hot that heated output air is. Of course, you wouldn't know much about that if you've only ever operated a desktop computer where the heat output is recircu
The Japanese cheated... (Score:2)
Specialization (Score:4, Interesting)
But if you want a versatile, general-purpose supercomputer, why not go with the clustering solution?
Re:Specialization (Score:3, Informative)
Because some problems don't work on clusters--things like large-scale molecular dynamics simulations with long-range spatial interactions.
Problems that require the nodes to share massive amounts of data between nodes (gigabytes per second and up--these problems often have N^2 behaviors) don't do so well on a cluster since they tend to saturate the network. A shared-memory system, like a supercomputer,
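A back-of-the-envelope sketch of why all-to-all communication patterns scale badly on a cluster (node counts purely illustrative):

```python
# If every node exchanges data with every other node each timestep,
# the number of pairwise transfers grows as roughly N^2, which is
# what ends up saturating a cluster's network.
def pairwise_transfers(num_nodes: int) -> int:
    """Directed pairwise exchanges per timestep for an all-to-all pattern."""
    return num_nodes * (num_nodes - 1)

for n in (8, 64, 512):
    print(f"{n:4d} nodes -> {pairwise_transfers(n):7d} transfers per step")
```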
The motivation is a tad depressing (Score:5, Insightful)
"The Earth Simulator created a tremendous amount of interest in high-performance computing and was a sign the U.S. may have been slipping behind what others were doing," said Jack Dongarra...
Graham said researchers should not overreact to NEC Corp.'s Earth Simulator that blindsided many in the high-performance computing community eighteen months ago by delivering a custom-built system five to seven times more powerful than the more off-the-shelf clusters developed in the U.S.
I don't mean to draw a crude analogy here, but I really can't help but read this and be reminded of the space race.
It took Sputnik to kickstart our space-mindedness; I for one consider it sad that a "tremendous amount of interest" -- and the funding that comes with it -- in high-performance computing seems only to have arisen/regenerated with the influence of competitive international politics. Are we really so little advanced that our respective national egos are still the driving force behind enthusiasm, financial or otherwise, in certain areas of science?
Re:The motivation is a tad depressing (Score:1)
Re:The motivation is a tad depressing (Score:2)
National pride has done a lot to further progress.
Re:The motivation is a tad depressing (Score:5, Interesting)
I don't really see that as bad. Yes, it may look like pure ego, but the space race gave us so much that filtered into the commercial/private sector. From advanced computers to Velcro(tm). From my perspective, being the most advanced nation in as many areas as possible is a good defense, both economically and in a homeland security sense.
Frankly, I don't want the fastest computer chips on the desktop to be designed by a company in another country (even if Intel makes them outside of the US), and I would rather that the cutting edge be cut here, in my native country. I am sure people in other countries feel the same, and that pushes all of us to new heights. In the end, the technologies are shared anyway. Most anyone in the world can buy Intel chips, for example.
If no one cared who could race a bicycle the fastest, Lance Armstrong would be just some guy who had cancer. Instead, our desire to compete and excel and outdo our neighbors has benefited EVERYONE a great deal. It can bring out the bad side from time to time, but the benefits far outweigh the costs. This urge to compete and win is not unique to America by any means; it is part of being human: man the animal.
I say bring on the computer chip wars: let's all compete -- Japanese, Americans, Europeans, Russians, come one, come all. In the end, we will all benefit, no matter who has the bragging rights for a day.
That's what I mean (Score:3, Insightful)
Good lord, why? Is it just national/istic pride? I see that as something to be outgrown with respect to driving, receiving, and appreciating scientific discoveries and technological advancements. Honestly, if Japan were to come out with, say, the first mass-produced DNA comp
Re:That's what I mean (Score:2)
Speaking as one who has played Civilisation until the late hours of the morning, I can confidently say that the country with the most advanced technology, wins.
Re:That's what I mean (Score:3, Interesting)
That term makes a lot of people uncomfortable: win.
People assume that when you have winners, you must have losers. While this is true in Civilization, it need not be true in life. It is true that when America innovates, it may benefit more, but everyone else that uses the product can benefit as well.
America put more money into developing the In
Winning (Score:1, Troll)
America and the UK are not really very secure. It doesn't help to have the best defence in the world, when you're act
Re:Winning (Score:2)
I admit, it may seem like you have a point, and I could tone it down. However, I only say what I observe, and I have actually spoken to Americans who have told me this themselves. I.e., I got this from educated Americans disgusted with their own society and leaders. I didn't come up with this myself.
Try to look at your own culture, what ideals you foster. It's very
Re:The motivation is a tad depressing (Score:2)
compete and excell and outdo our neighbors has benefited EVERYONE a great deal.
Well, not everyone.
There is a disproportionate number of underprivileged teenagers who believe they have a chance to play professional basketball and to earn big money. The large numbers of also-rans will have to make last-minute career plans to take into account lack of formal education: the only logical lucrative careers involve selling illegal substances.
There's a fine line between healthy competition and unhealthy compet
Re:The motivation is a tad depressing (Score:2)
Certain areas of science? Our egos, national or otherwise, are the driving force behind pretty much everything. Make a baby, jump on a grenade, write a kernel patch... Ego, pal. Get over it.
Besides, there is nothing as noble as ego driving this. Sand
Re:The motivation is a tad depressing (Score:2)
I really don't think it is ego at all... I think it's a matter of seeing something done better than you could do it, and then finally realizing how far behind you actually were...
People are always complacent until there is some competition... If company X has the most reliable software, then everything is great. When company Y suddenly co
Relative speed (Score:2, Insightful)
Specialized always outperforms... (Score:5, Insightful)
I use custom designed amplifiers because they work better for my application. I could buy off-the-shelf stuff (~$500~$10,000 range), but that won't be exactly what I want. I use custom software too... know why? Because it's designed specifically for the job. That same software shouldn't really be used for other fields of research, and neither should my amplifiers. The thing about this stuff is that it takes a lot of time to maintain (plus initial development). That means grad students, postdocs, and technicians who may spend over 90% of their time just keeping systems in working order and/or adding features. The benefits of customized hardware/software, in this instance, are worth the headaches associated with it.
All of my optics is commodity stuff (some is rare/exotic, but it's still basically black-box purchasing). I don't have the facilities to make coated optics, nor do I need anything that specialized, so... I just buy it.
When I was in telecom, we used Oracle and Solaris and Apache. It worked, and the cost of developing the same functionality in-house was ridiculously high (plus we'd never get to designing our products that sit on top of it).
Eventually, it always comes down to a comparison between the cost (man hours, equipment, etc) of custom building and of integrating stuff from OEMs.
So, the question our labs need to answer is, does clustered COTS hardware get the job done? Supplementary to that, is it cost-effective to buy/design it in light of the previous answer?
In any field where you are pushing the limits of technology, you have to make such trade-offs. Personally, I don't care who has the absolute fastest supercomputer (measured in flops, factoring-time, whatever)... what really counts is, who does the best research with the supercomputers.
Re:Specialized always outperforms... (Score:1)
That's kind of "black and white", wouldn't you say?
Oh, wait, you said (almost). Sorry, your sig almost killed it for me.
I'm already prepared! (Score:1)
Its specialty is executing x86 programs. I can also make some that specialize in PowerPC programs.
Specialization (Score:5, Insightful)
The great thing about generalized systems is you can use them to explore new areas, then design a specialized system to take advantage of specific optimizations the generalized one can't support.
I'm glad for the report suggesting a "balanced approach". I can't imagine forsaking one type of system for the other, as each has its place. (Uh-oh... generalized systems have a "place"? Does that mean they're specialized at being generalized? Oh, the irony!)
Duh. (Score:1)
Custom Software running on Custom Hardware [access-music.de] vs. Custom Software running on Commodity Hardware. [nativeinstruments.de]
Duh
Behind? (Score:2)
Since when is the US falling behind in supercomputing? I remember reading a list of the top supercomputers in the world, and the US had 14 of the top 20. Isn't it quantity in this case, not quality? Specialization is just the case here; so what if we don't have the absolute fastest?
Invest in Cray (Score:2, Interesting)
parallel FPGA supercomputers? (Score:1)
This is kind of a compromise between each node being a slow but adaptable general purpose CPU (with maximum flexibility) and a super fast (but inflexible) ASIC.
Perhaps the big barrier to this would be making the math and physics geeks write Verilog, or perhaps writing a really shit-ho
Re:parallel FPGA supercomputers? (Score:1)
I would love for these chips to be mass-produced for desktops.
Start the Gravy Train (Score:1)
Re:Start the Gravy Train (Score:2)
Not all problems are best solved by COTS clusters. Yes, they are very good for some problems, but not all. Some problems are best solved with vector-based systems like Cray makes. Just why do you think that a HUGE pile of PCs networked together is the end-all of supercomputing? Just as you do not want your airliner/car/pacemaker to run off a P4 and Windows, you might just want that supercomputer modeling the depletion of the ozone layer to be a vector system.
This greatly surprises me (Score:5, Interesting)
The main area in which we saw benefit was switching from the Portland Group Fortran Compiler [pgroup.com] to the Intel Fortran Compiler [intel.com], which cut the timestep (simulation time/real time) nearly in half.
Every cluster in the department is assembled from commodity x86 components. Groups here have been moving from proprietary Unix architectures to Linux/x86 systems and clusters. Our group started out on RS/6000s, then moved to SPARC, and is now moving to x86. In terms of price/performance there really is no comparison.
As for TCO, the lifetimes of clusters here are relatively short, one or two years at the most. Thus a high initial outlay cannot be offset by a lower cost of operation.
Re:This greatly surprises me (Score:3, Interesting)
Re:This greatly surprises me (Score:5, Interesting)
I get tired of seeing figures that compare peak flop rates and then don't mention that actual code performance isn't keeping up at all. The Japanese (and the Europeans who are allowed to buy NEC machines) are absolutely spanking the US when it comes to fluid codes (for climate modeling, for example), and it is largely because they are using vector machines with their old, highly optimized Fortran (or High Performance Fortran) codes. The MPP revolution in the U.S. has been manna for the CompSci community, but has set the computational physics community back by 10 years (except for those lucky bastards with embarrassingly parallel jobs).
I would give up an unnecessary body part for an Earth Simulator.
Re:This greatly surprises me (Score:2)
I've seen complaints about Ethernet latency.
Re:This greatly surprises me (Score:2)
Astronomers were doing this over a decade ago (Score:2)
yes (Score:1, Funny)
Re:yes (Score:1)
and I'm totally disappointed each and every time
It does matter (Score:1, Offtopic)
I wonder how rich we'll be when it finally hits us that irreparable damage has been caused to our environment? I hope we're really rich so we can afford to buy a new Earth. Cuz we might need one by then.
Re:It does matter (Score:1, Informative)
Re:It does matter (Score:1)
1) Many environmental processes are chaotic -- that is, they strongly depend on minor variations in their parameters and
2) We have only a very rough idea of the parameters -- huge new CO2 sinks and sources are discovered often, for instance.
So it doesn't matter how large a supercomputer you build, you're still going to get garbage out. But with a fast supercomputer, it'll be detailed and precise garbage...
Re:It does matter (Score:3, Interesting)
Why oh why? (Score:3, Interesting)
From what I've heard [anecdotally], computers like the Earth Simulator go vastly underutilized for the most part.
So, given that most nations [including the US] have budget problems, especially concerning education, couldn't people think of better uses for the money?
And before anyone throws a "it's the technology of it" argument my way, I'd like to add that if anything I'd rather have the money spent on researching how to make high performance low power processors [and memory/etc] instead. E.g. an Athlon XP 2Ghz that runs at 15W would be wicked more impressive than a 50,000 processor super computer that runs a highly efficient idle loop 99% of the time.
Tom
Re:Why oh why? (Score:5, Informative)
From first-hand experience, such computers are running jobs almost 24x7. Due to job scheduling details, there are times when some of the machine is idle, but this is still a small percentage. These machines are used for a vast array of applications, not just the advertised ones.
Now, utilization as a percentage of theoretical peak is another matter. For some algorithms, 20% of peak performance (IIRC) is considered good (i.e., a particular code might only get 2 TFlops on a machine rated for 10).
Re:Why oh why? (Score:2, Interesting)
From my experience, that is mostly untrue, yet widely publicized. Yes, if you look at utilization as (used-proc*sec)/(totaltime*numprocs), the number can be relatively low (~60-70%). However, that includes system time, rebooting the machine, weekends, holidays, etc. Further, when it comes down to it, the researchers need to have a reasonable turnaround time during the day for their develo
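The parent's utilization metric works out like this with made-up numbers chosen to land in the ~60-70% range mentioned (machine size and usage figures are purely illustrative):

```python
# Hypothetical one-week accounting window on a 128-processor machine.
num_procs = 128
total_seconds = 7 * 24 * 3600              # wall-clock window
available = num_procs * total_seconds      # processor-seconds on offer

# Suppose jobs actually consumed this many processor-seconds; the rest
# went to system time, reboots, scheduling gaps, weekends, holidays, etc.
used = 50_000_000

utilization = used / available             # (used-proc*sec)/(totaltime*numprocs)
print(f"utilization: {utilization:.1%}")
```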
Re:Why oh why? (Score:2)
Re:Why oh why? (Score:1)
Tom
Re:Why oh why? (Score:1)
I doubt the Athlon XP would be a processor of choice for many huge clusters. First off the processor is just too fucking hot. Second it uses a heck of a lot of power.
Tom
Re:Why oh why? (Score:2, Insightful)
Each time I bought a new computer, it wasn't because I wanted to rival a local supercomputer. It was because newer technology existed that was faster than what I had. The newer processor allowed me to do more.
If AMD could make a 2400+ which generates half the heat, I would use it. And such a decision would have nothing to do with the local super-computer capabiliti
Re:Why oh why? (Score:1)
For example, 5000 2GHz processors still will not make my hard disk faster or my word processor more responsive. It won't help little Suzy do her art homework, nor help Johnny pirate music on Kazaa.
Not that cluster research isn't important. Just that clusters and single nodes solve different problems. I me
Re:Why oh why? (Score:1)
When "prime generation" became a computing challenge [say, when RSA came about], the algorithms all focused on how to do it on personal computers [or low-end terminals]. In fact, RSA was never geared to "massive supercomputers" at all.
And in fact desktop computers and super-computers solve VERY DIFFERENT problems.
Desktop computers allow AC asshats to reply with "can the manham" and others [such as me] to do work. Super
Rise of the Specialized Machines (Score:1)
Link to the Earth Simulator Center (Score:3, Interesting)
What happened to superconducting computers? (Score:1)
Could someone with knowledge of supercomputers tell me the story here? Thanks.
superconductor computer petaflop [superconductorweek.com]
Not just for climate modeling (Score:3, Informative)
There seems to be an impression in some comments that this machine has some sort of special design that's only applicable to climate modeling problems. In fact, this is a vector-based supercomputer, applicable to any problem where you need to perform vector operations (i.e., operating on large arrays of numbers in parallel).
Certain numerical operations can be performed blindingly fast on these types of machines. Each arithmetic processor on this machine has 72 vector registers, each of which can hold 256 elements. Then you can perform operations on all 256 elements of 1 or more registers simultaneously! If the algorithm can keep the vector units fed, they will scream.
Since keeping data flowing to the processors is critical to speed, the high-speed interconnects (~12GB/s) are a must for any problem that is not completely localized. It's all about matching the problem to the hardware. There may well be problems for which a commodity cluster just can't get the job done like this can. Remember that each node of a cluster consumes power, produces heat, and takes up space. The raw cost of hardware is not the only consideration.
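Loosely illustrating the vector model described above -- NumPy on a commodity CPU stands in for real vector hardware here, and the register/element counts are the parent's figures:

```python
import numpy as np

# Each Earth Simulator arithmetic processor has 72 vector registers of
# 256 elements each; one vector instruction operates on a whole register.
# A NumPy array gives the same whole-vector programming model in software.
VECTOR_LENGTH = 256

a = np.arange(VECTOR_LENGTH, dtype=np.float64)   # 0, 1, 2, ..., 255
b = np.full(VECTOR_LENGTH, 2.0)

# One "vector" multiply-add across all 256 elements at once.
c = a * b + 1.0

print(c[:4])  # first few results: 1, 3, 5, 7
```

On real vector hardware the multiply-add above would be a handful of instructions, whereas a scalar CPU issues one operation per element; that is the whole game.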
NSA? (Score:2)
Nuts to that (Score:3, Informative)
The fact is, systems like ASCI Q and the Earth Simulator just aren't practical. They may look great on paper, but there's not much that they can do that can't be done on x86. Given the choice between paying over a hundred million for a proprietary cluster that might not even be all that reliable (*cough*Q*cough*) and requires expensive software and maintenance contracts, we see companies like Linux Networx offering high-power clusters on common hardware and free software that are a fraction as expensive.
As far as reliability goes, don't get suckered into thinking that proprietary and expensive mean quality. Q's failure rate [sandia.gov] is almost as high as my old Windows '98 machine hahaha. With the exception of a few missing chillers, Pink [lanl.gov] seems relatively healthy with only a few minor failures.
If CRAY and NEC want to get into a pissing contest in specs, that's fine. If they offer something that Intel can't, more power to them. Otherwise, the five organizations in the world that own their systems can be proud that they have the most powerful computer on paper for a year or two before someone builds a cheaper x86 cluster that matches or out-performs them.
Yes, they DO offer something that "intel" can't. (Score:3, Insightful)
I have the feeling the DOE (nuclear weapon simulation etc) simulation program is not going anywhere near as well as it was sold.
Massive commodity clusters boast big numbers, but they do not boast great throughput of USEFUL RESULTS (also, with massive clusters you have to be able to deal with inevitable hardware failures).
You have a certain fluid problem---there is a certain speed of sound, and a certain physical geometry. What you want to do is to be able to simulate the
Re:Nuts to that (Score:3, Insightful)
Re:Nuts to that (Score:2)
Big Whoop. My $200 desktop computer is faster than the super-computers of just a decade ago... What good does that do exactly?
My point is that you can't compare two systems unless they were installed in the same time-frame.
Also, saying that a cluster of comodity hardware is better than a supercomputer
Re:Nuts to that (Score:2)
Idle Time (Score:1)
Can't remember where I heard that though.
If these big supercomputers are so underutilized, why not run some public distributed projects on them in their spare time. (SETI, distributed.net, folding@home etc)
seti and custom computers (Score:1)
In the real world its a bit more complicated... (Score:4, Insightful)
Real answers are always more complicated. For example: the equations needed for nuclear simulation will probably require dedicated hardware (as the need for protein folding led to Blue Gene) to achieve the results that the Pentagon needs. But for many supercomputing tasks, the flexibility of COTS clusters will still be compelling, especially for areas where the algorithms are not yet fully developed (e.g., brain simulation). An interesting keynote at OLS 2003 argued that (some of) the problems are not going to be the local computing power but the need to move large quantities of data between research labs across the world and combine computational systems using the 'grid.' [globus.org] (For down-home examples of problems that have been successfully tackled through coarse-grained distribution, just look at SETI@Home [berkeley.edu] and Distributed.Net. [distributed.net]) So it's not just the flops anymore...
Specialized computers (Score:2)
The primary uses of supercomputers that I've read about are to perform simulations of real-world phenomena. It might be possible to construct circuitry that makes a computer more efficient at a series of specialized computing tasks. It's arguably more efficient to not use supercomputers at all.
(DANGER - intentional lack of sensitivity below)
Examples:
1. Genomic research - inject experimental drugs into real-live humans. If a higher p
Re:I don't care what approach we use (Score:1)
But the Japanese don't play Quake3!
They have more refined games, like Xenosaga.
</sarcasm>
Re:Ooh this is bad... (Score:2)