Hardware Technology

Supercomputers To Move To Specialization? 174

lucasw writes "Japan's Earth Simulator outperformed a computer at Los Alamos (previously the world's fastest) by a factor of three while using fewer, more specialized processors and advanced interconnect technology. This spawned multiple government reports that many suspected would call for more U.S. funding for custom supercomputer architectures and less emphasis on clustering commodity hardware. One report released yesterday suggests a balanced approach."
  • Cost comparison? (Score:5, Interesting)

    by Tyrdium ( 670229 ) on Wednesday August 13, 2003 @05:19PM (#6690113) Homepage
    Ignoring size, how does the cost of a cluster of fewer, highly specialized computers (with special interconnects, etc.) compare with that of a cluster of more, less specialized computers?
    • by ybmug ( 237378 ) on Wednesday August 13, 2003 @05:35PM (#6690238)
      The problem is that it may not be possible to match the computation of a cluster with specialized interconnects using just commodity hardware, no matter how many machines you throw at it. If a simulation has a low computation-to-communication ratio, its scalability is bound by the performance of the interconnects. In this case, throwing more commodity machines at the problem will actually increase the total time required to run the experiment.
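
      As a rough illustration of that point (my own back-of-the-envelope sketch, not from the parent post; all of the constants are made up), here is a toy C model where the compute work splits perfectly across N nodes but every step pays a communication cost that grows with N. Runtime falls at first, bottoms out, and then climbs again as more commodity boxes are added:

        #include <stdio.h>

        /* Toy model (hypothetical numbers): runtime of a communication-bound job
         * on N nodes. Compute shrinks as 1/N, but the per-step exchange cost is
         * assumed to grow linearly with the node count. */
        int main(void) {
            const double compute_seconds = 100000.0; /* total serial compute time */
            const double comm_per_node   = 5.0;      /* extra comm cost per additional node */
            for (int n = 16; n <= 4096; n *= 2) {
                double t = compute_seconds / n + comm_per_node * n;
                printf("%5d nodes: %8.1f s\n", n, t);
            }
            return 0;
        }

      Past the minimum (around 140 nodes with these invented numbers), adding machines makes the run slower, which is exactly the "more boxes = more time" effect described above.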
      • Re:Cost comparison? (Score:5, Informative)

        by mfago ( 514801 ) on Wednesday August 13, 2003 @05:59PM (#6690373)
        The interconnects are (usually) not commodity parts -- just the servers.

        As an example, the first IBM SP "supercomputers" were essentially just common Power workstations bolted into racks, but connected with a custom made SP switch.

        Nevertheless, the Earth Simulator has shown what can be done by designing the entire system from the ground up with the application in mind.

        We'll have to see how ASCI Purple performs...
        • Re:Cost comparison? (Score:3, Informative)

          by theedge318 ( 622114 )
          I recently had the opportunity to speak with the designers of ASCI Purple and Lightpath ... and there is definitely a reason that they can't use stock parts.

          Currently the interconnects are the biggest setback ... all of these supercomputers are designed with two-dimensional floorplans ... with the goal of minimizing the distances between the various parts of the computer throughout the room.

          Lightpath, which is designed to be a "low"-cost supercomputer, is based upon a bio-med computer out of NY (p
        • With the Bombe and Colossus machines?

    • by Anonymous Coward
      Good question there man.

      I am also wondering, which should I get? I mean, with Doom III on its way, to get decent frames should I go with a specialized supercomputer, or a Linux Beowulf cluster?

    • Ignoring size, how does the cost of a cluster of fewer, highly specialized computers (with special interconnects, etc.) compare with that of a cluster of more, less specialized computers?

      I am guessing that cost is not the most important factor when it comes to supercomputers anyway. If you are the CIA, NOAA, or biotech, yes, keeping costs down is nice, but performance is more of an issue. There was a similar article a few days back about the new Crays, and how Cray's sales are up significantly. I can see
    • While the one-time hardware cost is clearly going to be significantly lower for a cluster of commodity machines, it is equally clear that the ongoing expenses of space occupied, power, and cooling will favor custom hardware.
    • Chances are the supercomputer will cost a great deal more. With a cluster of off-the-shelf components, the cost of R&D is spread over the mass-produced parts; with a processor designed specially for one or a few installations, the cost of all the time and research that went into it has to be recouped across only a few parts.
      It would be just the same as a luxury car compared to a Kia: the parts are similar, and they do similar things, but
    • This is a very complicated question, the answer to which varies with the evolution of certain technologies. Some factors include processor speed (in FLOPS), memory latency and bandwidth, system bus/interconnect latency and bandwidth, and mass storage bus bandwidth. It actually turns out to be kind of a linear programming problem. There are some crucial software factors involved as well.

      The people at the large supercomputing centers, those who fund them, and the companies making supercomputing equipment, al
  • performance vs cost (Score:4, Interesting)

    by harmless_mammal ( 543804 ) <jrzagar@yahoo. c o m> on Wednesday August 13, 2003 @05:19PM (#6690117)
    Teraflops per dollar is important, let's not forget that.
  • Benchmarking (Score:3, Insightful)

    by Anonymous Coward on Wednesday August 13, 2003 @05:20PM (#6690120)
    How does one go about benchmarking a supercomputer specialized to do a certain task versus cheap computers in a cluster? Now we need to spend more money to develop specialized supercomputers even though the scenario presented in Japan might not hold true for other applications? It seems a little too soon to start making recommendations.
  • by Goalie_Ca ( 584234 ) on Wednesday August 13, 2003 @05:20PM (#6690129)
    Skynet had 60 Teraflops IIRC and they're talking about 100!

    Let's hope this isn't tied into Nukes somehow. Wait a sec, a massive virus has already spread disabling millions of computers!

    RUN HIDE! THE END IS UPON US!!!!!!!
    • by BabyDave ( 575083 ) on Wednesday August 13, 2003 @05:39PM (#6690253)

      There's a far more important thing to worry about - could this be the end of "Imagine a Beowulf Cluster ..." jokes? After all, the phrase "Imagine a custom-built supercomputer utilising similar technology (albeit more specialised) to that found in one of those!" doesn't exactly roll off the tongue, does it?

    • The topic is about supercomputers, this post is about supercomputers, and it's marked offtopic. Which of these three doesn't belong?
    • Skynet had 60 Teraflops IIRC and they're talking about 100!
      Let's hope this isn't tied into Nukes somehow. Wait a sec, a massive virus has already spread disabling millions of computers!

      Yeah, since we all know that any intelligent, distributed computer system's first goal is to blow itself up. Think about it: if SkyNet was running as a massively parallel program on all the PCs in the world, then by blowing up the cities, wasn't SkyNet blowing itself up? This plot hole is so big you can drive a Toyota Tu

  • by Raul654 ( 453029 ) on Wednesday August 13, 2003 @05:21PM (#6690132) Homepage
    Japan's Earth Simulator outperformed a computer at Los Alamos (previously the world's fastest) by a factor of three while using fewer, more specialized processors...
    What is the difference between a processor designed to simulate earthquakes (et al.) and an ordinary, off-the-shelf processor? I mean, so they optimized floating-point operations. Is that it?
    • by Boone^ ( 151057 ) on Wednesday August 13, 2003 @05:32PM (#6690210)
      Ordinary off-the-shelf microprocessors don't have the bandwidth to memory or bandwidth to other processors to simulate complex problems. NEC's machine is a vector architecture (SX-6), similar to the kind you see in the Cray X1 [cray.com]. Vector architectures are SIMD-style processors.
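
      To make the bandwidth point concrete (a rough, illustrative calculation of my own, not taken from NEC or Cray specs), consider a simple streaming kernel, a[i] = b[i] + s*c[i]: every two floating-point operations move three 8-byte doubles, so sustainable memory bandwidth, not clock rate, decides how fast the loop can go. The peak-flop numbers below are placeholders, not real figures:

        #include <stdio.h>

        /* Back-of-the-envelope bandwidth demand for the streaming kernel
         * a[i] = b[i] + s*c[i]: 2 loads + 1 store of 8-byte doubles per 2 flops.
         * The peak-flop figures are illustrative placeholders, not real specs. */
        int main(void) {
            const double bytes_per_flop = (3 * 8.0) / 2.0;   /* 12 bytes per flop */
            const double peak_gflops[]  = { 8.0, 2.0 };      /* "vector CPU" vs. "commodity CPU" */
            const char  *label[]        = { "vector-style CPU", "commodity CPU" };
            for (int i = 0; i < 2; i++)
                printf("%-17s needs ~%.0f GB/s to keep this kernel fed\n",
                       label[i], peak_gflops[i] * bytes_per_flop);
            return 0;
        }

      A commodity front-side bus of this era delivers only a few GB/s, which is why off-the-shelf parts stall on exactly the kind of loops a vector machine is built for.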
    • trigonometry? (Score:4, Informative)

      by SHEENmaster ( 581283 ) <travis&utk,edu> on Wednesday August 13, 2003 @05:33PM (#6690221) Homepage Journal
      I assume that hard-coding trig functions into the tertiary processors would be advantageous for this. I know it violates the spirit of RISC in general-purpose computing, but for such a large-scale system with so many processors it could be worth it.

      Do HP's Saturn or other such special-purpose processors have hard-coded higher-level functions?
    • They are vector processors, which means they are optimized for operating on matrices.
    • by QuantumRiff ( 120817 ) on Wednesday August 13, 2003 @05:40PM (#6690257)
      Generic processors are inefficient. Imagine having the fastest processor on earth, and then take that chip and use it to do the calculation of x1++ (that's x1 = x1 + 1 for you non-C'ers) and loop it a few trillion times. Then take a processor that is designed specifically to do x1++, and only that calculation. You can run a hell of a lot faster, you don't need to worry about having to multiply, divide, etc.; they're smaller, and cooler, and after the cost of engineering them, cheaper.

      Can't remember the link, but somebody made a board with a few FPGA chips (I think) that cracked a 56-bit DES key in a few days or less, and distributed.net had how many computers working on it for how many years?

      It's all about designing the chip for the application. The ones they are referring to would probably be designed to do mass computation of heavy physics, and only be able to run custom nuke simulation software.

      The thing I am interested in, as an ex-Computer Systems Engineering major, is whether they are interested in designing and fabbing processors from the ground up, or using an assload of FPGAs or something from a company like Altera and programming them.

      • Um, I think you're thinking of the RC-64 and 128 projects, which took years. Don't quote me on this, but back when I was actually running a D.net client, they had talked about doing 56-bit contests, and those usually only lasted a few days; then everyone would go back to the 128-bit contest.

        I got bored of it all and switched to the Intel Cancer project. More useful. Too bad it doesn't run on linux.
      • The project I was talking about was this one. [eff.org] Sure, it cost a cool quarter of a million at the time, but this was back in '98 and '99. Costs have dropped, processing power has increased, blah, blah, blah...
      • Can't remember the link, but somebody made a board with a few FPGA chips (I think) that cracked a 56bit DES key in a few days or less, and distributed.net had how many computers working on it for how many years?

        I think you're thinking of the EFF's DES cracking machine. It used a custom gate array chip - it took advantage of the cheapness of an ASIC, but not the extra efficiency (they couldn't afford to have the first round of chips not work properly - a large proportion of the chips didn't work properl


      • Imagine having the fastest processor on earth, and then take that chip and use it to do the calculation of x1++ (that's x1 = x1 + 1 for you non-C'ers) and loop it a few trillion times.


        Well, regardless of the chip performing the calculation, I'd prefer doing

        x1 += a_few_trillion;

        instead of looping. ;-)

        But yes, I get your point. "Supercomputer CPU's", like in the earth simulator, are optimised for SIMD operations. Given a vectorizing FORTRAN compiler and a BLAS library handwritten in assembler to ta
  • So does this mean that supercomputers will be developed without following current hardware standards, or is there a new standard being formed for supercomputers? So I guess my Athlon XP won't fit in their CPU socket, will it? Damn... So much for cheap AMD CPUs for supercomputers.

    Makes you wonder if Japan has already developed a nice powerful 128bit supercomputer to dish out to crush any competition. :P
    • Re:Interesting.. (Score:3, Insightful)

      by RevMike ( 632002 )
      So I guess my Athlon XP won't fit in their CPU socket will it? Damn... So much for cheap AMD CPUs for supercomputers.

      Stop being silly. The cooling requirements of an Athlon based massively powerful supercomputer would eat up the savings from using standard parts.

      Seriously, though - I would guess, actually, that if one were to build a supercomputer from a "desktop" processor, the PPC970 (aka G5) chips would be a good choice. They have a solid vector unit, are RISCier, have a wider bus, and a better pi

      • You have to be kidding; the SX-6 that powers the Earth Simulator already dissipates well over 230W, which is as much as nearly 5 Athlon XP 1900+'s (47.7W typical). The SPECfp2000 result for the Athlon is 634, so to be competitive on a SPEC/Watt ratio the SX-6 would have to score almost 3,200, or more than 50% higher than the highest published result. You are correct that the Athlon would be hampered by the memory bus, as well as the 4GB per process limit. My guess is the Opteron will probably be VERY competitive a
      • The cooling requirements of an Athlon based massively powerful supercomputer would eat up the savings from using standard parts.

        Athlons dissipate less heat and have higher operating temperatures than P4s.

        Besides that, it doesn't take much to dissipate "extra" heat... Once you've got the fans that suck the heat outdoors, it doesn't really matter how hot that heated output air is. Of course, you wouldn't know much about that if you've only ever operated a desktop computer where the heat output is recircu

  • Their computer had Tallow use a PsyBeam attack, which is totally whacked.
  • Specialization (Score:4, Interesting)

    by bersl2 ( 689221 ) on Wednesday August 13, 2003 @05:23PM (#6690149) Journal
    If you're going to have a supercomputer do one thing, of course specialize it. An Earth simulation surely has a set number of formulae whose calculations are to be optimized as much as possible, even to the hardware level.

    But if you want a versatile, general-purpose supercomputer, why not go with the clustering solution?
    • Re:Specialization (Score:3, Informative)

      by jstott ( 212041 )

      But if you want a versatile, general-purpose supercomputer, why not go with the clustering solution?

      Because some problems don't work on clusters--things like large-scale molecular dynamics simulations with long-range spatial interactions.

      Problems that require the nodes to share massive amounts of data (gigabytes per second and up--these problems often have N^2 behavior) don't do so well on a cluster, since they tend to saturate the network. A shared-memory system, like a supercomputer,

  • by Faust7 ( 314817 ) on Wednesday August 13, 2003 @05:23PM (#6690150) Homepage
    The two studies resulted, in part, from NEC Corp.'s May 2002 announcement of the Earth Simulator, a custom-built supercomputer that delivers 35.8 teraflops. That system packed five times the performance of the fastest U.S. supercomputer at that time...

    "The Earth Simulator created a tremendous amount of interest in high-performance computing and was a sign the U.S. may have been slipping behind what others were doing," said Jack Dongarra...

    Graham said researchers should not overreact to NEC Corp.'s Earth Simulator that blindsided many in the high-performance computing community eighteen months ago by delivering a custom-built system five to seven times more powerful than the more off-the-shelf clusters developed in the U.S.


    I don't mean to draw a crude analogy here, but I really can't help but read this and be reminded of the space race.

    It took Sputnik to kickstart our spacemindedness; I for one consider it sad that a "tremendous amount of interest" -- and the funding that comes with it -- in high-performance computing seems only to have arisen/regenerated with the influence of competitive international politics. Are we really so hardly advanced that our respective national egos are still the driving force behind enthusiasm, financial or otherwise, in certain areas of science?
    • No, I still believe that many people believe in the advancement of science purely for the sake of the cause. But these people are asking for government contracts, and the government won't fork out the big bucks unless there's a good reason. And, unfortunately, shelling out big bucks for stuff the common man doesn't care about isn't good politics.
    • *What* an idealist! Buddy, ever wonder why 'competition is good' for our economy? Hm? Think it would be any different on a national/global level?

      National pride has done a lot to further progress.
    • by Pharmboy ( 216950 ) on Wednesday August 13, 2003 @05:57PM (#6690363) Journal
      It took Sputnik to kickstart our spacemindedness; I for one consider it sad that a "tremendous amount of interest" -- and the funding that comes with it -- in high-performance computing seems only to have arisen/regenerated with the influence of competitive international politics. Are we really so hardly advanced that our respective national egos are still the driving force behind enthusiasm, financial or otherwise, in certain areas of science?

      I don't really see that as bad. Yes, it may look like pure ego, but the space race gave us so much that filtered into the commercial/private sector. From advanced computers to Velcro(tm). From my perspective, being the most advanced nation in as many areas as possible is a good defense, both economically and in a homeland security sense.

      Frankly, I don't want the fastest computer chips on the desktop to be designed by a company in another country (even if Intel makes them outside of the US), and I would rather that the cutting edge be cut here, in my native country. I am sure people in other countries feel the same, and that pushes all of us to new heights. In the end, the technologies are shared anyway. Most anyone in the world can buy Intel chips, for example.

      If no one cared who could race a bicycle the fastest, Lance Armstrong would be just some guy who had cancer. Instead, our desire to compete and excel and outdo our neighbors has benefited EVERYONE a great deal. It can bring out the bad side from time to time, but the benefits far outweigh the costs. This urge to compete and win is not unique to America by any means; it is part of being human: man the animal.

      I say bring on the computer chip wars: let's all compete, Japanese, Americans, Europeans, Russians, come one, come all. In the end, we will all benefit, no matter who has the bragging rights for a day.
      • That's what I mean (Score:3, Insightful)

        by Faust7 ( 314817 )
        Frankly, I don't want the fastest computer chips on the desktop to be designed by a company in another country (even if Intel makes them outside of the US), and I would rather that the cutting edge be cut here, in my native country.

        Good lord, why? Is it just national/istic pride? I see that as something to be outgrown with respect to driving, receiving, and appreciating scientific discoveries and technological advancements. Honestly, if Japan were to come out with, say, the first mass-produced DNA comp
          • Good lord, why? Is it just national/istic pride? I see that as something to be outgrown with respect to driving, receiving, and appreciating scientific discoveries and technological advancements.


          Speaking as one who has played Civilisation until the late hours of the morning, I can confidently say that the country with the most advanced technology, wins.
          • by Pharmboy ( 216950 )
            Speaking as one who has played Civilisation until the late hours of the morning, I can confidently say that the country with the most advanced technology, wins.

            That term makes a lot of people uncomfortable: win.

            People assume that when you have winners, you must have losers. While this is true in Civilization, it need not be true in life. It is true that when America innovates, it may benefit more, but everyone else that uses the product can benefit as well.

            America put more money into developing the In
            • Winning (Score:1, Troll)

              by Steeltoe ( 98226 )
              People see themselves as "winning", often when they trample on others. This is because of a mis-identification. People identify with THEIR OWN community, nationality, religion, and other personal biases. Instead, if you identify yourself as a human or spiritual being, you will see that there are only other human beings around you. Not Muslims, not Christians, not Japanese, not even lawyers.

              America and the UK are not really very secure. It doesn't help to have the best defence in the world, when you're act

      • compete and excel and outdo our neighbors has benefited EVERYONE a great deal.

        Well, not everyone.

        There is a disproportionate number of underprivileged teenagers who believe they have a chance to play professional basketball and to earn big money. The large numbers of also-rans will have to make last-minute career plans to take into account lack of formal education: the only logical lucrative careers involve selling illegal substances.

        There's a fine line between healthy competition and unhealthy compet

    • only to have arisen/regenerated with the influence of competitive international politics. Are we really so hardly advanced that our respective national egos are still the driving force behind enthusiasm, financial or otherwise, in certain areas of science?

      Certain areas of science? Our egos, national or otherwise, are the driving force behind pretty much everything. Make a baby, jump on a grenade, write a kernel patch... Ego, pal. Get over it.

      Besides, there is nothing as noble as ego driving this. Sand
    • Are we really so hardly advanced that our respective national egos are still the driving force behind enthusiasm, financial or otherwise, in certain areas of science?

      I really don't think it is ego at all... I think it's a matter of seeing something done better than you could do it, and then finally realizing how far behind you actually were...

      People are always complacent until there is some competition... If company X has the most reliable software, then everything is great. When company Y suddenly co

  • Relative speed (Score:2, Insightful)

    by silmarildur ( 671071 )
    Is there a way to really compare the speed of a supercomputer and commodity hardware? If anyone could give either a quick explanation or a link explaining the relationships between BogoMIPS, teraflops, MHz, and the whole lot, I would be very appreciative.
  • by I'm a racist. ( 631537 ) on Wednesday August 13, 2003 @05:28PM (#6690183) Homepage Journal
    Specialized hardware (almost) always outperforms commodity stuff.

    I use custom-designed amplifiers because they work better for my application. I could buy off-the-shelf stuff (in the ~$500-$10,000 range), but that won't be exactly what I want. I use custom software too... know why? Because it's designed specifically for the job. That same software shouldn't really be used for other fields of research, and neither should my amplifiers. The thing about this stuff is that it takes a lot of time to maintain (plus initial development). That means grad students, postdocs, and technicians who may spend over 90% of their time just keeping systems in working order and/or adding features. The benefits of customized hardware/software, in this instance, are worth the headaches associated with it.

    All of my optics is commodity stuff (some is rare/exotic, but it's still basically black-box purchasing). I don't have the facilities to make coated optics, nor do I need anything that specialized, so... I just buy it.

    When I was in telecom, we used Oracle and Solaris and Apache. It worked, and the cost of developing the same functionality in-house was ridiculously high (plus we'd never get to designing our products that sit on top of it).

    Eventually, it always comes down to a comparison between the cost (man hours, equipment, etc) of custom building and of integrating stuff from OEMs.

    So, the question our labs need to answer is, does clustered COTS hardware get the job done? Supplementary to that, is it cost-effective to buy/design it in light of the previous answer?

    In any field where you are pushing the limits of technology, you have to make such trade-offs. Personally, I don't care who has the absolute fastest supercomputer (measured in flops, factoring-time, whatever)... what really counts is, who does the best research with the supercomputers.
  • So you want more specialized supercomputer, eh? I can build one right now!

    Its specialty is executing x86 programs. I can also make some that specialize in PowerPC programs.
  • Specialization (Score:5, Insightful)

    by bytesmythe ( 58644 ) <bytesmythe@@@gmail...com> on Wednesday August 13, 2003 @05:31PM (#6690207)
    Specialized systems are almost always going to outperform generalized systems when you're dealing with similar levels of technology (for instance, specialized abacuses vs. a generalized Cray T3E).

    The great thing about generalized systems is you can use them to explore new areas, then design a specialized system to take advantage of specific optimizations the generalized one can't support.

    I'm glad for the report suggesting a "balanced approach". I can't imagine forsaking one type of system for the other, as each has its place. (Uhoh... generalized systems have a "place"? Does that mean they're specialized at being generalized? Oh, the irony! ;))
  • by torpor ( 458 )
    Hello.

    Custom Software running on Custom Hardware [access-music.de] vs. Custom Software running on Commodity Hardware. [nativeinstruments.de]

    Duh ...
  • The reports come on the heels of recent congressional testimony warning that the United States is falling behind in supercomputing.

    Since when is the US falling behind in supercomputing? I remember reading a list of the top supercomputers in the world, and the US had 14 of the top 20. Isn't it quantity in this case, not quality? Specialization is just the case here; so what if we don't have the absolute fastest?
  • Invest in Cray (Score:2, Interesting)

    by Teahouse ( 267087 )
    Cray is back and getting back into the government contract game. Surprisingly, they are doing it just as the DOD is realizing that they need specialized hardware like they used to have when Cray was one of their best suppliers. Look for little ol' Cray to be back in the black real quick, and pick up a few shares now.

  • One would think that, given the kind of applications for which parallel supercomputers are used, it would be (in some cases) efficient to do the computation in arrays of FPGAs loaded with application-specific designs.

    This is kind of a compromise between each node being a slow but adaptable general purpose CPU (with maximum flexibility) and a super fast (but inflexible) ASIC.

    Perhaps the big barrier to this would be making the math and physics geeks write verilog, or perhaps writing a really shit-ho

  • Great argument for people with their heads in the trough. We need funding for specialized, proprietary hardware so we don't fall behind the Japanese. Intel/AMD CPUs aren't good enough. Sun can't compete on price/performance with Linux/Intel. NASA lost a couple of Mars probes (expensive, custom hardware), while a cheap Mars Rover [nasa.gov] mission makes it there with OTS parts. Of course, if you are aiming for taxpayer funding, your cost/performance priorities are the same as if you are spending your own money.
    • Everything is a nail if you only have a hammer.
      Not all problems are best solved by COTS clusters. Yes, they are very good for some problems, but not all. Some problems are best solved with vector-based systems like Cray makes. Just why do you think that a HUGE pile of PCs networked together is the end-all of supercomputing? Just as you do not want your airliner/car/pacemaker to run off a P4 and Windows, you might just want that supercomputer modeling the depletion of the ozone layer to be a vector system.
  • by ikewillis ( 586793 ) on Wednesday August 13, 2003 @05:41PM (#6690269) Homepage
    As an employee of an atmospheric modelling group [colostate.edu] I am very surprised to hear this. Our atmospheric modelling program, the Regional Atmospheric Modelling System [atmet.com], is not I/O bound in the slightest and is instead very much CPU bound. We currently use 100bT for the interconnect on our cluster, and have tried moving to Gigabit with negligible performance gains.

    The main area in which we saw benefit was switching from the Portland Group Fortran Compiler [pgroup.com] to the Intel Fortran Compiler [intel.com], which cut the timestep (simulation time/real time) nearly in half.

    Every cluster in the department is assembled from commodity x86 components. Groups here have been moving from proprietary Unix architectures to Linux/x86 systems and clusters. Our group started out on RS/6000s, then moved to SPARC, and is now moving to x86. In terms of price/performance there really is no comparison.

    As for TCO, the lifetimes of clusters here are relatively short, one or two years at the most. Thus a high initial outlay cannot be offset by a lower cost of operation.

    • I also work in the geophysical modeling arena, and you will find that one of the biggest differences between using a purpose-built supercomputer and a lot of OTS equipment is memory speed. It is typical to reach only 10% of peak efficiency when running an application, even with nicely structured problems like you are running. While you claim that you are CPU bound, you really are not. For example, if you run on a slower CPU but with a better memory subsystem or a larger cache (e.g., SGI vs. Intel/Linux) you will find
    • by FullyIonized ( 566537 ) on Wednesday August 13, 2003 @08:11PM (#6691213)
      And I'm surprised to hear that you are surprised, since fluid modeling is one of the applications that does very well on the vector processors the Earth Simulator uses. I attended a lecture by Dr. Sato, head of the Earth Simulator, who stated that the best application usage was 65% of peak (the theoretical peak, which assumes that the processor always has data to crunch and no branches) and the average was 30% of theoretical peak. By contrast, typical fluid-like codes on current U.S. machines get less than 10% of peak if they have any type of implicitness (currently the magnetohydrodynamics code I use gets about 6% on an IBM SP that is #5 on the Top 500 supercomputer list).

      I get tired of seeing figures that compare peak flop rates and then don't mention that actual code performance isn't keeping up at all. The Japanese (and the Europeans who are allowed to buy NEC machines) are absolutely spanking the US when it comes to fluid codes (for climate modeling, for example), and it is largely because they are using vector machines with their old, highly optimized Fortran (or High Performance Fortran) codes. The MPP revolution in the U.S. has been manna for the CompSci community, but has set the computational physics community back by 10 years (except for those lucky bastards with embarrassingly parallel jobs).

      I would give up an unnecessary body part for an Earth Simulator.

    • I know that 1000bT has 10x the bandwidth, but does it have more than negligible latency gains? That might be a factor.

      I've seen complaints about Ethernet latency.
      • Ethernet has about 100 us (microseconds) of latency, regardless of whether it's 100 or 1000 Mbit. Specialized cluster interconnects, like Myrinet, Scali, and Quadrics, have about 3-7 us. Of course, those interconnects cost a fortune too.
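
        To put those figures in perspective (a crude hypothetical calculation of my own, not from the parent post), suppose a tightly coupled code does about 1 ms of computation between each small message exchange:

          #include <stdio.h>

          /* Hypothetical workload: 1 ms of compute between small message exchanges.
           * Latency figures are the rough numbers quoted above. */
          int main(void) {
              const double compute_us   = 1000.0;
              const double latency_us[] = { 100.0, 5.0 };
              const char  *name[]       = { "Ethernet (100/1000)", "Myrinet/Quadrics-class" };
              for (int i = 0; i < 2; i++) {
                  double overhead = 100.0 * latency_us[i] / (compute_us + latency_us[i]);
                  printf("%-22s ~%.1f%% of wall time lost to latency\n", name[i], overhead);
              }
              return 0;
          }

        Roughly 9% of the run disappears into Ethernet latency versus well under 1% on the specialized interconnect, and the gap widens quickly as messages get more frequent.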
  • with the "GRAPE" [arizona.edu] computers. (More links) [google.com]. I expect there are examples going back to the dawn of the computer age.
  • yes (Score:1, Funny)

    by Anonymous Coward
    Because all your general super-computing needs will be filled by the G5.
    • that's what everyone says every generation on the Apple roadmap

      and I'm totally disappointed each and every time
  • It does matter (Score:1, Offtopic)

    by Cyno ( 85911 )
    how many supercomputers we build. We'll still be wasting cycles processing the first few nanoseconds of nuclear explosions on them, or trying to find more oil or money or WMDs. The last thing we care about is the environment we all have to live in.

    I wonder how rich we'll be when it finally hits us that irreparable damage has been caused to our environment? I hope we're really rich so we can afford to buy a new Earth. Cuz we might need one by then.
    • Re:It does matter (Score:1, Informative)

      by Anonymous Coward
      Perhaps you haven't looked at what the Earth Simulator is actually being used for? It is doing plenty of environmental research. The humorous thing is that the US wishes to develop faster systems with more capacity than the Earth Simulator only to simulate weapons (see hpc.mil). The Japanese have the right idea; it will take big science and big computers to solve our environmental problems.
      • The more humorous thing is that

        1) Many environmental processes are chaotic -- that is, they strongly depend on minor variations in their parameters and

        2) We have only a very rough idea of the parameters -- huge new CO2 sinks and sources are discovered often, for instance.

        So it doesn't matter how large a supercomputer you build, you're still going to get garbage out. But with a fast supercomputer, it'll be detailed and precise garbage...
      • Re:It does matter (Score:3, Interesting)

        by afidel ( 530433 )
        Would you rather they simulate weapons or resume detonation testing of new designs? The fact is the US has a VERY large and ever-aging supply of weapons; most of the cycle time so far from the ASCI projects has gone towards stewardship of the existing crop of weapons, making sure that the stockpiles are safe and also that they will be effective (if, God forbid, they should be needed). Also, reduced consumption is the only thing that will reduce our environmental "problems". Personally I think anyone who think
  • Why oh why? (Score:3, Interesting)

    by tomstdenis ( 446163 ) <tomstdenis&gmail,com> on Wednesday August 13, 2003 @05:49PM (#6690316) Homepage
    Definitely, a really huge supercomputer would be neat to have, but honestly, are they putting the ones we already have to good use?

    From what I've heard [anecdotally], computers like the Earth Simulator go vastly underutilized for the most part.

    So given that most nations [including the US] have budget problems, especially concerning education, couldn't people think of better uses for the money?

    And before anyone throws an "it's the technology of it" argument my way, I'd like to add that if anything I'd rather have the money spent on researching how to make high-performance, low-power processors [and memory/etc] instead. E.g. an Athlon XP 2GHz that runs at 15W would be wicked more impressive than a 50,000-processor supercomputer that runs a highly efficient idle loop 99% of the time.

    Tom
    • Re:Why oh why? (Score:5, Informative)

      by mfago ( 514801 ) on Wednesday August 13, 2003 @06:11PM (#6690450)
      computers like the Earth Simulator go vastly underutilized for the most part

      From first-hand experience, such computers are running jobs almost 24x7. Due to job scheduling details there are times when some of the machine is idle, but this is still a small percentage. These machines are used for a vast array of applications, not just the advertised ones.

      Now the utilization as a percentage of peak theoretical is another matter. For some algorithms, 20% of peak performance (IIRC) is considered good (ie. a particular code might only get 2 TFlops on a machine rated for 10).
    • Re:Why oh why? (Score:2, Interesting)

      by gsabin ( 657664 )
      From what I've heard [anecdotally], computers like the Earth Simulator go vastly underutilized for the most part.

      From my experience that is mostly untrue, yet widely publicized. Yes, if you look at utilization as (used-proc*sec)/(totaltime*numprocs), the number can be relatively low (~60-70%). However, that includes system time, rebooting the machine, weekends, holidays, etc. Further, when it comes down to it, the researchers need to have a reasonable turnaround time during the day for their develo
    • Someone needs to convince those Japanese scientists to run this [distributed.net] on their super computer.
  • In his book, After the Internet: Alien Intelligence, James Martin predicted the future will be dominated by highly specialized "machines" tailored to perform a limited set of tasks. His vision of AI is quite different from what we call it today.
  • by GeoGreg ( 631708 ) on Wednesday August 13, 2003 @06:22PM (#6690509)
    If you'd like to see what these people are up to for yourself, here is a link [jamstec.go.jp] to their website. Lots of performance data, lists of projects, etc.
  • superconducting supercomputers were all the rage, and then I haven't heard anything new.

    Could someone with knowledge of supercomputers tell me the story here? Thanks.

    superconductor computer petaflop [superconductorweek.com]

  • by GeoGreg ( 631708 ) on Wednesday August 13, 2003 @06:35PM (#6690588)

    There seems to be an impression in some comments that this machine has some sort of special design that's only applicable to climate modeling problems. In fact, this is a vector-based supercomputer, applicable to any problem where you need to perform vector operations (i.e., operating on large arrays of numbers in parallel).

    Certain numerical operations can be performed blindingly fast on these types of machines. Each arithmetic processor on this machine has 72 vector registers, each of which can hold 256 elements. Then you can perform operations on all 256 elements of 1 or more registers simultaneously! If the algorithm can keep the vector units fed, they will scream.

    Since keeping data flowing to the processors is critical to speed, the high-speed interconnects (~12GB/s) are a must for any problem that is not completely localized. It's all about matching the problem to the hardware. There may well be problems for which a commodity cluster just can't get the job done like this can. Remember that each node of a cluster consumes power, produces heat, and takes up space. The raw cost of hardware is not the only consideration.
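
    As a rough sketch of how that gets exploited in practice (mine, not the parent poster's; the strip length of 256 matches the register size described above, everything else is illustrative), a vectorizing compiler typically "strip-mines" a long loop so each chunk fills one vector register and becomes a single vector instruction rather than 256 scalar ones:

      /* Strip-mining: walk a long array in vector-register-sized chunks.
       * On a vector machine the inner loop below maps to one instruction
       * operating on a whole 256-element register at a time. */
      #define VLEN 256

      void scale_add(int n, double a, const double *x, double *y) {
          for (int i = 0; i < n; i += VLEN) {
              int len = (n - i < VLEN) ? (n - i) : VLEN;
              for (int j = 0; j < len; j++)
                  y[i + j] += a * x[i + j];   /* no loop-carried dependency: vectorizes cleanly */
          }
      }

    Loops like this are exactly the ones that keep the vector units fed; an algorithm full of branches or scattered memory accesses gets none of this benefit, which is why matching the problem to the hardware matters so much.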

  • by heli0 ( 659560 )
    Is there any speculation out there about what type of supercomputers the NSA has? Their budget is off the record (estimated $6Billion+/yr) and surely they have interest in cracking all of those 4096-bit encrypted messages sent between the US and Saudi Arabia et al.
  • Nuts to that (Score:3, Informative)

    by DeathPenguin ( 449875 ) * on Wednesday August 13, 2003 @07:19PM (#6690895)
    The Earth Simulator is impressive in its own regard, but in no way are the majority of clustering applications going toward these 'specialized' systems. Governments, research labs, etc. want powerful computers that are dirt cheap. Los Alamos's ASCI Q (Installment 1, the Alpha servers) cost over $100,000,000 to build, while their Pink cluster cost about $6,000,000 in hardware. On paper, Pink and ASCI Q are both around 10 teraflops. ASCI Q runs Quadrics on 64-bit 66MHz PCI; Pink is getting an upgrade to Myrinet Lanai 10 on PCI-X (from Lanai 9 on 64/66 PCI). Not only that, but Pink runs the open-source, 100% GPL'd Clustermatic software and can be booted in a matter of seconds rather than hours like ASCI Q.

    The fact is, systems like ASCI Q and the Earth Simulator just aren't practical. They may look great on paper, but there's not much they can do that can't be done on x86. Rather than paying over a hundred million for a proprietary cluster that might not even be all that reliable (*cough*Q*cough*) and requires expensive software and maintenance contracts, we see companies like Linux Networx offering high-powered clusters on common hardware and free software at a fraction of the cost.

    As far as reliability goes, don't get suckered into thinking that proprietary and expensive mean quality. Q's failure rate [sandia.gov] is almost as high as my old Windows 98 machine's, hahaha. With the exception of a few missing chillers, Pink [lanl.gov] seems relatively healthy, with only a few minor failures.

    If Cray and NEC want to get into a pissing contest over specs, that's fine. If they offer something that Intel can't, more power to them. Otherwise, the five organizations in the world that own their systems can be proud that they have the most powerful computer on paper for a year or two, before someone builds a cheaper x86 cluster that matches or outperforms them.
    • That is the whole point.

      I have the feeling the DOE (nuclear weapon simulation etc) simulation program is not going anywhere near as well as it was sold.

      Massive commodity clusters boast big numbers, but they do not boast great throughput of USEFUL RESULTS. (Also, with massive clusters you have to be able to deal with inevitable hardware failures.)

      You have a certain fluid problem---there is a certain speed of sound, and a certain physical geometry. What you want to do is to be able to simulate the
    • Re:Nuts to that (Score:3, Insightful)

      by afidel ( 530433 )
      Actually, the customized vector machines will usually achieve a MUCH higher percentage of their theoretical peak computational capacity on certain "hard" problems than a cluster of commodity machines. The nearness of the nodes dictates that: if the average near-neighbor latency is an order of magnitude lower, then problems that are communication-bound are going to be able to achieve much higher throughput on a tightly coupled cluster of faster, more specialized nodes than they would be able to on a larger, more lo
    • Los Alamos's ASCI Q (Installment 1, the Alpha servers) cost over $100,000,000 to build, while their Pink cluster cost about $6,000,000 in hardware. On paper, Pink and ASCI Q are both around 10 teraflops.

      Big Whoop. My $200 desktop computer is faster than the super-computers of just a decade ago... What good does that do exactly?

      My point is that you can't compare two systems unless they were installed in the same time-frame.

      Also, saying that a cluster of commodity hardware is better than a supercomputer

      • ASCI Q and Pink were both in fact installed in the same time frame. Actually, both of these can be classified as clusters. I guess the point was that ASCI Q is a cluster of DEC, uh, Compaq, sorry, HP Alpha machines (=expensive) while Pink is an x86 cluster running Linux.
  • "The new Cray supercomputers can execute an infinite loop in under a second"
    Can't remember where I heard that though.

    If these big supercomputers are so underutilized, why not run some public distributed projects on them in their spare time. (SETI, distributed.net, folding@home etc)

  • I've wondered, what would a mass-produced box capable of running SETI@home cost? Something I could plug into my router and the power, send it my user name, and let it burn through SETI packets. Hmmm.
  • by depeche ( 109781 ) on Wednesday August 13, 2003 @08:55PM (#6691521) Homepage
    There is also a direct trade-off between more general-purpose systems and systems custom-tailored to a task. Good examples are Deep Blue [ibm.com] and Blue Gene [ibm.com]. Both of these systems are designed with a particular task in mind (i.e., chess and protein folding) and therefore are able to leverage knowledge about the problem space to constrain the kind of hardware, the particular low-level instructions, and the information flow within the system, while achieving significantly greater performance on a small class of problems.

    I work with clusters that are used in scientific communities with various researchers working on various problems. In these cases, the questions are about the basic applicability of a particular problem to a particular architecture. For example, a cluster with high-speed interconnects made of good COTS hardware will serve a user with a very granular problem effectively, and it will also serve a user who needs the high-speed interconnect because the problem space demands a high degree of internal communication. But the first researcher might also be able to make use of a grid of (for instance) many more computers at a lower total cost, because (s)he doesn't need the high-speed interconnect.

    The Earth Simulator gains a lot of performance (on a class of problems) because of the underlying vector processor architecture. Given the right internal bus, it is conceivable that adding vector processor daughter boards to the next generation of COTS clusters could achieve similar results--but, of course, only for problem spaces that make efficient use of such processors and aren't bottlenecked by the communication requirements.

    Real answers are always more complicated. For example: the equations needed for nuclear simulation will probably require dedicated hardware (as the need for protein folding has led to Blue Gene) to achieve the results that the Pentagon needs. But for many supercomputing tasks, the flexibility of COTS clusters will still be compelling, especially for areas where the algorithms are not yet fully developed (e.g. brain simulation). An interesting keynote at OLS 2003 argued that (some of) the problems are not going to be local computing power but the need to move large quantities of data between research labs across the world and combine computational systems using the 'grid.' [globus.org] (For down-home examples of problems that have been successfully tackled through coarse-grained distribution, just look at SETI@Home [berkeley.edu] and Distributed.Net. [distributed.net]) So it's not just the flops anymore...
  • Call this flamebait or troll, but we don't need no stinkin' supercomputers!

    The primary uses of supercomputers that I've read about are to perform simulations of real-world phenomena. It might be possible to construct circuitry that makes a computer more efficient at a series of specialized computing tasks. It's arguably more efficient not to use supercomputers at all.

    (DANGER - intentional lack of sensitivity below)

    Examples:

    1. Genomic research - inject experimental drugs into real-live humans. If a higher p
