HP IBM Hardware

Top 500 Supercomputer List Released

sundling writes "The heavily anticipated Top 500 Supercomputer list has been released. There is a sevenfold increase in AMD Opteron processors on the list. Two sections of an IBM prototype took spots in the top 10, and the famous Apple cluster didn't make the list because it was out of service for hardware upgrades. When complete, the new IBM cluster is sure to take the top spot from the Earth Simulator."
  • by Moblaster ( 521614 ) on Monday June 21, 2004 @07:20AM (#9483006)
    IBM's new supercomputer will calculate "42" before the Japanese. America can feel good again.
    • I have a theory that Deep Thought had a word size of 42 bits... thus the answer to the great question, the meaning of everything, is 42. Just as humans anthropomorphize 'god' as an elderly grandparent with a long white beard, etc.
    • America can feel good again.

      Should we really feel so good about this list? Should we really feel so good that such a significant portion of American computational resources is devoted to warfare and the design of weapons of mass destruction?

      Look at the other machines at the top of the list. Where do other countries place their computational resources?

      Eric Salathe

  • by dkone ( 457398 ) on Monday June 21, 2004 @07:23AM (#9483016)
    They are not running their site on one of the top 500.
  • Oh dear (Score:4, Funny)

    by Killjoy_NL ( 719667 ) <slashdot@@@remco...palli...nl> on Monday June 21, 2004 @07:24AM (#9483018)
    Does nobody see what is about to happen?
    Those computers will read that list and know which computers to connect to, to take over the world!!

    Doesn't anyone read comics anymore??
    May $DEITY have mercy on us all.
  • by garethwi ( 118563 ) on Monday June 21, 2004 @07:27AM (#9483036) Homepage
    ...oh forget it.

  • IBM's Blue Gene (Score:5, Interesting)

    by zal ( 553 ) on Monday June 21, 2004 @07:27AM (#9483038)
    Last Thursday there was a little HPC event by IBM at my university. Apart from the usual BladeCenter for Scale-Out Computing PR blurb, there was also a one-hour presentation by one of IBM's senior strategy analysts. What I found most interesting was how they basically use embedded processors for Blue Gene due to cooling and power-consumption issues. He talked about thermal design, from the basic components right up to computing heat dissipation for the whole room, so you know where to put the very heat-sensitive Myrinet/InfiniBand components.
    • Re:IBM's Blue Gene (Score:5, Informative)

      by flaming-opus ( 8186 ) on Monday June 21, 2004 @09:03AM (#9483824)
      Did they mention why Myrinet and InfiniBand are heat sensitive? I've used Myrinet before and did not encounter any problems with it, though I was not using 1U dual-CPU systems (just a bad idea in general). A Myrinet card includes a pretty high-clocked ASIC that runs warm for a network card, but it's nothing compared to most graphics cards these days.

      Blue Gene is an amazingly simple and crafty design, with efficiency at its heart. I'm not sure it will be as successful as the IBM marketing machine claims it will, but it's exciting nonetheless.

      The trend in CPUs over the last ten years or so has been to maximally fill long, wide superscalar pipelines. The Power4 has half a dozen execution units and a 15-stage pipeline running at 1.7 GHz. To keep that full, one has to have exceptional branch prediction, huge caches, superb compilers, and tons of memory bandwidth.

      The Blue Gene approach is to have fewer, shallower, lower-clocked pipelines, but lots of CPUs. Their peak speed is a quarter of the top CPU designs, but their real speed is half of the big guns. Since they are using today's chip technology to implement yesterday's chip designs, they use little power and are very inexpensive. Since IBM has cleverly integrated all the communication networks and memory controllers, you only need three components in the system: CPUs, RAM chips, and passive circuit boards - plastic and copper. (Yeah, I'm sure there is other stuff, but not much.)

      The design is not revolutionary; it's a fairly intuitive evolution of the Paragon or the T3E. This sort of system may not be perfect for every task, but it will excel at the sorts of tasks that already work well on big clusters. That, and it will likely be very cost effective.
    • To clear up any confusion: Blue Gene/L doesn't use Myrinet or InfiniBand; it's connected internally by separate torus, tree, and global-interrupt networks, with gigabit Ethernet to the outside world.
  • by Noryungi ( 70322 ) on Monday June 21, 2004 @07:28AM (#9483046) Homepage Journal
    What's interesting is that Disney is #57 in the Top500, while Weta has the #77 and #80 spots... an impressive showing by the entertainment companies.

    On the other hand, PDI (Pacific Data Images -- Shrek), Pixar and ILM do not appear in the list, which is also very interesting.
    • ... I guess that's because rendering is inherently scalable, i.e. there is no advantage in building one big, bad ubermachine. It's far simpler to parcel out frames between any number of render farms, many of which you may not even own. (A toy sketch of this kind of frame scheduling follows at the end of this subthread.)

      It is a "make or buy" situation. Given an efficient payment system, I do not see why they should not render using some program similar to Folding@home.
      • Because the movie has to be done on time, and it is difficult to guarantee that enough CPU time will be available from a Folding@home-type distributed network. Users may turn their PCs off, get bored with it and decide to give/sell their unused cycles to a different project, etc.
        • Actually, the biggest reason is that the scene data is gigabytes in size and the machines need to be maxed out on the RAM they can hold. My friend had a single texture on his senior digital film project that was larger than most systems' RAM (570MB IIRC).
        • Additionally, if you're talking about a publicly available distributed network, it seems a little unfair that Pixar and ILM would make all this profit when I could be spending my computer cycles more wisely on a charitable project like curing cancer, decoding the genome, and so forth.
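
To make the "parcel out frames" point above concrete, here is a toy round-robin frame scheduler. It only illustrates why per-frame rendering parallelizes so easily; the farm names are invented for the example:

```c
/* Toy illustration of frame-level parallelism in rendering: every frame
 * is independent, so a scheduler can deal frames out to any number of
 * render farms (hypothetical names) without coordination between them. */
#include <stdio.h>

int main(void) {
    const char *farms[] = { "in-house", "rented-A", "rented-B" };
    int nfarms = sizeof(farms) / sizeof(farms[0]);
    for (int frame = 1; frame <= 12; frame++)
        printf("frame %02d -> %s\n", frame, farms[frame % nfarms]);
    return 0;
}
```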
    • by flaming-opus ( 8186 ) on Monday June 21, 2004 @08:35AM (#9483512)
      I did some contract work at ILM several years ago, and I know why this is. They don't use one big machine, but rather a bunch of medium-sized clusters. This is for a very good reason: Weta has, thus far, worked on one big movie at a time, where all of their resources are dedicated to a single data set. ILM is constantly working on half a dozen movies all at once.

      In essence, they lease some amount of resources to a particular movie studio for some number of months. At the time they were doing this with row upon row of 32-processor SGIs, but they are probably using something else these days. Thus no spot on the Top500 list. However, since they are in the business of making movies, I bet they don't really care.
  • That's the old trick about multiplying by zero, right?
  • by -noefordeg- ( 697342 ) on Monday June 21, 2004 @07:29AM (#9483051)
    So how do they measure?

    The link isn't working right now, so I'll make a guess...

    Tests must at least include Q3, UT2004 and 3DMark03, but since these are pretty powerful computers I guess they also use some sort of advanced custom-built Minesweeper with a 10,000x10,000 grid playing field, or some wild crazy stuff like that.
    Maybe 400+ page Word documents?
    The final test is probably Halo for PC. Any fps score above 20 will result in a spot > 100 on the list.
  • Google cluster? (Score:3, Interesting)

    by millwall ( 622730 ) on Monday June 21, 2004 @07:30AM (#9483054)
    Google cluster not in here? What do you reckon the performance/size of such a cluster could be?
    • Re:Google cluster? (Score:4, Interesting)

      by nutshell42 ( 557890 ) on Monday June 21, 2004 @07:48AM (#9483140) Journal
      I'm just guessing here so sue me

      Google has an impressive cluster but it's optimized for storage and parallel page access.

      I don't think you could use Google's cluster to compute 42 without distributing the work by hand over the different servers; it wasn't built to do calculations, but to answer page requests distributed over the different units and to access the most complete mirror of today's web.

    • Re:Google cluster? (Score:5, Interesting)

      by pete-wilko ( 628329 ) on Monday June 21, 2004 @07:56AM (#9483181)
      I heard a lecture from Jack Dongarra about HPC and the Top 500; he mentioned that Google declines to participate, as they weren't inclined to reveal their setup or to run the Top 500 benchmarks, which would mean putting their machines to other uses for the duration of the benchmark. If I remember correctly, he said that, at a guess (based on the various 'guesstimates' out there of Google's setup), if they did participate they'd easily make the top 10, if not push for number 1. This also leaves aside arguments over the role the system is trying to fulfill (i.e. easily distributed work, like a search engine, vs. work that can't be broken up easily, like an earth simulator).
      • I find that hard to believe.

        At that node count, interconnect speed and latency become the crucial factor, and from what I know about Google they have 3-4 data centers with servers connected via commodity interconnect (1000/100).

        From my understanding of Linpack, their setup just isn't suitable for such a workload.
    • Many other clusters/grid computers can't be there. Defense clusters are among the most powerful, but people are really reluctant to tell others what their computing power is made of when it comes to defense...

      In research, there are huge grids that cannot be benchmarked because they are co-financed; given the difficulty of determining who would get credit for the whole grid, it is best avoided.

      And the final reason could be that, now that the manufacturers are waging a war on this list…

    • I found this on the Folding@Home site. It seems that they are running FAH in spare time, and when you have a look at the statistics of team 446 [stanford.edu], you see that they are the top team and that they had 23,721 CPUs active during the last 50 days...

      That tells more about "the beast". So far, I can just tell that it is made of Linux clusters containing about 12,500 nodes, because with clusters you are facing dual-processor systems 98% of the time.

      Here is the trail, if someone wants to hunt the beast.

    • Google cluster not in here? What do you reckon the performance/size of such a cluster could be?
      The Google cluster isn't there because the Google cluster isn't a supercomputer. It's a bunch of distributed machines performing related tasks asynchronously, not a clustered set of machines dedicated to performing a single (howsoever complex) task with varying degrees of synchronism.
  • by Fourier ( 60719 ) on Monday June 21, 2004 @07:38AM (#9483087) Journal
    Heavily anticipated by whom? I understand that the Superbowl is heavily anticipated. The upcoming US election is heavily anticipated. To a lesser degree, today's SpaceShipOne launch is heavily anticipated. But honestly, are there any people gathering around the water cooler exchanging rumors of who has the edge in cluster network latency this year? (Supercomputer administrators don't count.)

    Somebody needs a little perspective...
    • by TimeZone ( 658837 ) on Monday June 21, 2004 @08:23AM (#9483390)
      You probably don't understand that a lot of people are employed in the area. I worked on technology that went into the #6 machine, and yeah, the top500 lists mean a lot to us. I've been waiting a long time for something I worked on to end up in the top 10.

      TZ

      • Well, I guess it's all relative. You can label the anticipation heavy as much as you want for (1) a sufficiently small subset of the population or (2) sufficiently small values of "heavy". ;-)

        Congratulations on getting your piece of the top ten.
      • Cool. I worked on it too. [ibm.com] I designed some of the most kick-a$$ cables especially for it:

        040-681-3925, Serial Port Converter Cable, 9-Pin to 25-Pin. $22.50
        Converts signals both ways, handles voltages up to 100 volts! Necessary for syncing your Palm.

        7040-681-3125, Serial to Serial Port Cable for Rack/Rack. $72.00
        Flexible, with two connectors, one at each end. No cheap $50 serial cables for you -- this computer demands the best! Doubles as a tie-down strap when transporting your p690.
  • WWDC Power (Score:2, Interesting)

    by artlu ( 265391 )
    Maybe we can get everyone at the WWDC to use Xgrid [apple.com] and break into the #1 slot for a brief second. Damn, I want Apple to take that spot.

    GroupShares Inc [groupshares.com] - A Free Online Stock Trading Community
    • Re:WWDC Power (Score:3, Interesting)

      by Talez ( 468021 )
      Do it!

      Assuming an average of 1GHz per person and 4 FLOPS per cycle (assuming you could get AltiVec working flat strap), if 70,000 people turn up that could work out to be... ummm... 280 teraflops.

      You'd have yourself a Universe Simulator with that amount of power!
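
For anyone checking the arithmetic above, here it is as a tiny C program. All three inputs (1GHz average clock, 4 FLOPS per cycle from AltiVec, 70,000 attendees) are the poster's assumptions, not measurements:

```c
/* Back-of-the-envelope check of the 280-teraflop figure quoted above. */
#include <stdio.h>

int main(void) {
    double clock_hz = 1e9;       /* assumed average clock: 1 GHz per machine */
    double flops_per_cycle = 4;  /* assumed AltiVec running "flat strap" */
    double machines = 70000;     /* assumed WWDC turnout */

    double total = clock_hz * flops_per_cycle * machines;
    printf("%.0f teraflops\n", total / 1e12); /* prints: 280 teraflops */
    return 0;
}
```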

    • Re:WWDC Power (Score:2, Informative)

      by deadline ( 14171 )
      Better read this [clusterworld.com] first there, cowboy. It is not as easy as you think.
  • by Sunspire ( 784352 ) on Monday June 21, 2004 @07:42AM (#9483108)
    At least 5 of the top 10 systems are running Linux, starting at number two with Thunder at the Lawrence Livermore National Laboratory [intel.com]. The others are IBM BlueGene/L clusters at #4 and #8 [linuxdevices.com], Tungsten at NCSA at #5 [uiuc.edu], MPP2 at Pacific Northwest National Laboratory at #9 [quadrics.com], and probably also the Dawning 4000A at the Shanghai Supercomputer Center at #10, though I'm not 100% sure about that last one.
    • by Anonymous Coward
      Just about 1 out of 64 chips runs a full PowerPC version of Linux (lightly modified); that will be the node that handles input/output for its group of nodes. The others run a small, custom, very stripped-down kernel; I don't remember the name. I attended a presentation at my university here in Italy.
      On a side note, the cluster that will control Blue Gene (yes, to control the big beast they are planning to use a cluster of machines; for example, they will use DB2 to store information about the 64 thousand…
    • Well, in a supercomputer OS you really only have two choices. You can create a microkernel OS that runs on all the computation nodes and makes system calls to service nodes.

      Or you can cluster together a bunch of monolithic kernels. At 8000 processors you aren't going to be able to use one monolithic kernel, so the distinction between a moderately scalable OS like Linux and a highly scalable OS like Solaris/IRIX is a bit of a moot point. 1000 OS images instead of 250? It's a nuisance either way.
    • As much as you like Linux, it's 'clusters' that still rule, not Linux. Linux is only used in a fair number of clusters because A) it's the most popular Unix for x86 and B) they probably save on licensing costs.

      The important distinction for supercomputers is 'cluster' versus shared memory.
  • Is there a list of most powerful clusters? If so, does /. make that list?
  • "The Thunder system, based on 4,096 Intel Corp. Itanium 2 processors, at LLNL recorded a maximum performance of 20T flops"

    I hope they're not using Linux. That's a LOT of SCO licenses...
  • My machine (Score:4, Funny)

    by swapsn ( 701280 ) on Monday June 21, 2004 @07:47AM (#9483127)

    I see my machine has not made it into the list. Ah well. Maybe next year...
  • Pity they don't have one of the top 500 running the website. It seems to be going rather slowly at the moment...
    • I'm sure it would be fast, but would the HPC organization be good for DB/CGI stuff, or is there more bang for your buck if you wire them up differently?
      • A supercomputer would be really rather unsuitable for this kind of thing. The most important thing by far is having a really fat pipe, as opposed to a really fast computer. Besides, loosely coupled clusters with load balancing would do a much better job (for the cost) than a supercomputer, since the (expensive) tight coupling that a supercomputer gives is unnecessary.

  • by YouHaveSnail ( 202852 ) on Monday June 21, 2004 @07:47AM (#9483133)
    It's worth pointing out that if you're going to consider a given supercomputer "AMD" or "Intel" based on where the processors come from, then Virginia Tech's cluster of Apple Xserves is an "IBM" machine.

    That's not to take anything away from Apple. Both the Xserve and the G5 towers that came before it are a great design: reliable, running a great OS, yada yada yada. But the chips are IBM.
    • by Anonymous Coward
      The PowerPC G5 is the product of a long-standing partnership between Apple and IBM, two companies committed to innovation and customer-driven solutions. In 1991, they co-created a PowerPC architecture that could support both 32-bit and 64-bit instructions. Leveraging this design, Apple went on to bring 32-bit RISC processing to desktop and portable computers, while IBM focused on developing 64-bit processors for enterprise servers. The new PowerPC G5 represents a convergence of these efforts: Its design is b…
      • The PowerPC G5 is the product of a long-standing partnership between Apple and IBM, two companies committed to innovation and customer-driven solutions. In 1991, they co-created a PowerPC architecture that could support both 32-bit and 64-bit instructions.
        A bit revisionist, don't you think? Motorola was an equal partner in the PowerPC alliance, and in fact all Apple PPC machines used Motorola CPUs until quite recently.
  • + 65 for IBM (Score:5, Interesting)

    by freeduke ( 786783 ) on Monday June 21, 2004 @07:47AM (#9483135) Journal
    I have seen that there are 65 more IBM supercomputers in June than on the previous list (a jump from 159 to 224). I tried to figure out which computers those were, because it is an impressive gap: +65 out of 500 in six months? A marketing gap?

    Last time, HP was impressive because they filled the bottom of the list with Itanium-based Superdomes, all ranked on identical benchmark figures; that means those computers were not benchmarked by the customers but by HP. That was a good opportunity for IBM: each time they could put one of their computers on the list, they were sure to throw an HP one off it, and so increase the gap with their main rival by a factor of 2 (+1 for IBM, -1 for HP).

    So I am now wondering if this Top500 list still means anything in terms of performance and computing power, or is just a promotional tool where manufacturers wage a war over market share.

  • In other news, Bush administration officials have created 1.2 million new jobs by hiring unemployed Americans to close pop-up windows for Lawrence Livermore National Laboratory, whose new supercomputer will be used to study nuclear bombs, the weather, and the dynamics of carpal tunnel syndrome.
  • mirror site (Score:1, Informative)

    by Anonymous Coward
    Use the Mirror [top500.org]
  • Before everyone starts congratulating AMD on its success and talking about the superiority of the Opteron and Intel's imminent demise, etc., I thought it might be worth mentioning that AMD isn't the only company improving on the list:

    A look at the hardware shows Intel Corp. making big gains on its competitors, with a total of 287 machines based on Intel chips, up from 119 this time last year.
  • by patrik ( 55312 ) <pbutler@kill[ ]ux.org ['ert' in gap]> on Monday June 21, 2004 @08:46AM (#9483621) Homepage
    1) The VT cluster will probably never beat the EarthSim. Why? Because the interconnects (fancy network connections) are so specialized on EarthSim that it will tromp any off-the-shelf system. Furthermore, everything about the EarthSim computers is built to be clustered as they are. VT uses infiniband, which is faster and lower latency than Myranet or the other common cluster interconnects, which is part of the reason why it kicks so much butt, but the systems are still pretty much off the shelf and will never be able to beat EarthSim. Of course, VT does it for millions upon millions less and much more cost-effectively, so even if it's not #1, in many ways it is the best.
    2) Google's cluster is (probably) a much more distributed system; it would probably take a severe beating trying to run the LinPack benchmark they use to rank the Top500. The algorithm requires a lot of data passing, and Google's cluster probably doesn't excel at low-latency or even high-bandwidth (>16Gb/s) data passing. That's just an educated guess, though; AFAIK that information is pretty well secreted. In raw processing power under one roof Google probably has it made, but since most problems (not all, read: *@home) in science and math require lots of data passing between nodes, Google would probably get trounced in the Top500.

    Patrik
    • The VT cluster will probably never beat the EarthSim. Why? Because the interconnects....

      The fact that the Earth Simulator has 130% more processors than VT's Mac cluster probably has nothing to do with it.
      • ~3.5 times the speedup for ~2.3 times the processors; it's not all in the number of processors. Tell you what: if someone builds a 36TFlop machine with the interconnects that VT uses, drop me an email and I'll buy you a pizza. Infiniband is great stuff, but it's a somewhat more generalized networking technology; unless you do something crazy with the network topology you're going to hit bandwidth limitations. We're talking about math problems that easily require terabytes of message passing.

        Patrik
        • Oops, forgot to log in... let's post again so you see it without browsing at score 0:


          ~3.5 times the speedup for ~2.3 times the processors; it's not all in the number of processors.


          Gee, perhaps that's because the Earth Simulator has vector processors, which perform quite well on the Linpack benchmark given a good vectorizing Fortran compiler. Not to mention that Linpack isn't _that_ demanding of bandwidth and latency; otherwise you wouldn't see all those clusters in the top ten. Or the top 100, for that matter.
    • The VT cluster will probably never beat the EarthSim.

      Considering that they're not even trying to get to #1, that's a deep observation.
    • VT uses infiniband which is faster and lower latency than Myranet or the other common cluster interconnects

      A few things:

      This is Myrinet, not Myranet.

      InfiniBand does not have lower latency than Myrinet, at least not at the MPI level. Using MX, I get 3.5us with Pallas with E cards and 4us with D cards, and there is no trick like polling only a few sources or caching the memory registration.

      MX is not completely finished, but I will release a beta version this week so you can reproduce the numbers. (A minimal sketch of this kind of ping-pong measurement appears at the end of this thread.)

      Patrick

      • I didn't even realize there was a Myranet; I just misspelled Myrinet. It sounds like you have more first-hand experience than I do, but according to the numbers Mellanox claims (4.5us) and the numbers Myrinet claims (6.3us), Infiniband at least on paper looks lower latency.

        http://lqcd.fnal.gov/ib/ (half way down the page)

        Patrik
        • On the Fermilab web page, they say they compared the latest Topspin product against their two-year-old Myrinet B cards. Not exactly apples to apples. It's funny: they ended up using Gigabit Ethernet point-to-point; it was much more cost-effective.

          There are a lot of things on paper with IB. The last two times I used tiny demo IB clusters that various vendors were evaluating, I saw 7.5us at the MPI level, but I am very biased too.

          On Myrinet, 6.3us is with GM, 3.5us is with MX (my baby). Same hardware, different…
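
For readers wondering where microsecond figures like "3.5us with Pallas" come from: such benchmarks boil down to an MPI ping-pong, where half the averaged round-trip time of a tiny message is reported as the one-way latency. A minimal sketch (not the actual Pallas or MX code) might look like this:

```c
/* Minimal MPI ping-pong latency sketch. Build and run with two ranks:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 * Half the averaged round-trip time of a zero-byte message approximates
 * the one-way MPI latency the posters above are comparing. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const int reps = 10000;
    char buf[1];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);  /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("one-way latency: %.2f us\n", elapsed / reps / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```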

  • Sheesh (Score:5, Funny)

    by HarveyBirdman ( 627248 ) on Monday June 21, 2004 @08:48AM (#9483646) Journal
    the famous Apple cluster didn't make the list, because it was out of service for hardware upgrades

    :-\

    In other news, Car & Driver released their list of top ten coolest cars. The new Ford GT was not included because Bob had it in the garage for an oil change.

  • And the top one is #10. Russia took 2 positions. Japan took 30+ positions. Germany took 30+ positions. The United Kingdom took 30+ positions. France took 10+ positions. Well, Skynet still has a long way to go to take control of Russia.
  • Is it possible to run that Linpack benchmark on my gaming PC?
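
You can get the flavor of it at home: the Top500 ranking comes from timing a large dense LU factorization and dividing the operation count (roughly 2/3·n³ floating-point operations) by the wall time. A toy single-machine version, definitely not the tuned HPL code the list actually uses, might look like:

```c
/* Rough single-box Linpack-style sketch (not the real HPL benchmark):
 * time an LU factorization and convert to GFLOPS the way Top500 does.
 * LU of an n x n matrix costs about (2/3) n^3 floating-point operations. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000  /* toy size; real Top500 runs use n in the hundreds of thousands */

static double a[N][N];

int main(void) {
    /* Random matrix with a boosted diagonal, so factoring without
     * pivoting stays numerically stable for this demonstration. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (double)rand() / RAND_MAX + (i == j ? N : 0.0);

    clock_t start = clock();
    /* In-place LU factorization, no pivoting. */
    for (int k = 0; k < N - 1; k++)
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            a[i][k] = m;
            for (int j = k + 1; j < N; j++)
                a[i][j] -= m * a[k][j];
        }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    double flops = 2.0 / 3.0 * (double)N * N * N;
    printf("n=%d: %.2f s, %.2f GFLOPS\n", N, secs, flops / secs / 1e9);
    return 0;
}
```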

  • I am also surprised not to see my workplace on the list, but I was told a few reasons why we are not on there. We didn't get all of our machines in on time to post, and we didn't want to make our larger facility look bad. Also, I see the results only include one machine and not the total computing power of all the machines as one cluster. I can tell you for a fact that in the last Top500 results we were in the 300 range, but we only tested one of our nodes (an SGI Origin 3000 with 512 CPUs) and not the cluster. It's a…
  • June 1994 (Score:4, Interesting)

    by Darth Cider ( 320236 ) on Monday June 21, 2004 @11:48AM (#9485652)
    Check out the June 1994 list [top500.org]. Ten years ago, supercomputers at about the 100th place on the list had the gigaflop performance of today's desktops. Flashmob1 [flashmobcomputing.org], the University of San Francisco event in April that assembled a 180-gigaflop cluster in a single day, would have taken the number 1 spot. It's just cool to imagine the trend continuing, and it could, especially with WiFi or WiMAX collective computing.
  • Public (Score:3, Insightful)

    by ManoMarks ( 574691 ) on Monday June 21, 2004 @12:43PM (#9486236) Journal
    These are the top 500 that we know about. What do you bet the NSA (and whatever the Chinese and possibly the Russian equivalents are) has at least 1 that is faster than all of these?
    • I doubt this.

      Over time, secrets leak out. I've never heard of a government having more computing power than what is commercially available. Maybe Moore's law makes this impossible.

      Think about it: whatever the fastest cluster is now, in 18 months it will probably be doubled. 18 months is shorter than the usual procurement cycle for government!

      If there were some black op to produce government-only HPCs, where would they get their engineers? Somebody would have talked by now.

      I've no doubt that the NSA is making good use…
    • These are the top 500 that we know about. What do you bet the NSA (and whatever the Chinese and possibly the Russian equivalents are) has at least 1 that is faster than all of these?

      Unlikely. There's a reason the Earth Simulator has been #1 for two years now. You can't just throw more hardware at the problem; you have to throw money and design at it. Even secretive organizations like the NSA have their technical and even monetary limits.

      These secretive organizations probably used to have the fastest…
    • The Russians don't have the money to put together a toaster.

  • The odd thing is that in the AMD press release they note 30 AMD chips, but the top 500 site itself says 34 AMD processors [top500.org]. I wonder what the story is on the other 4.

    Paul Sundling
  • So if you look at ranks 81 through 92, there are a lot of machines using Xeon 2.8s with gig-e. Rank 81 has 1030 processors; rank 92 has 650.

    Yeah, that's a difference of 380 processors, close to 37% of the larger count... and Rmax is 2026 for all of them.

    Sure, this is one benchmark only, but damn, that must look bad when your extra 380 procs don't get any improvement. (The per-processor arithmetic is spelled out below.)
  • Yeah, I would agree that those extra CPUs aren't getting any action. A shame my work's 3232 CPUs can't show off their potential with the kind of testing they have up. We need better tests!!
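
Spelling out the arithmetic behind this exchange, with the Rmax (in GFLOPS) and processor counts quoted above:

```c
/* Per-processor efficiency behind the ranks-81-vs-92 comparison above:
 * identical Rmax from very different processor counts means the extra
 * CPUs contribute nothing to the measured Linpack result. */
#include <stdio.h>

int main(void) {
    double rmax = 2026.0;                    /* GFLOPS, listed for both machines */
    int procs_rank81 = 1030, procs_rank92 = 650;

    printf("rank 81: %.2f GFLOPS/processor\n", rmax / procs_rank81); /* ~1.97 */
    printf("rank 92: %.2f GFLOPS/processor\n", rmax / procs_rank92); /* ~3.12 */
    printf("extra processors: %d (%.0f%% of rank 81's count)\n",
           procs_rank81 - procs_rank92,
           100.0 * (procs_rank81 - procs_rank92) / procs_rank81);    /* 380, ~37% */
    return 0;
}
```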
