HP IBM Hardware

Top 500 Supercomputer List Released 167

sundling writes "The heavily anticipated Top 500 Supercomputer list has been released. There is a sevenfold increase in AMD Opteron processors on the list. Two sections of an IBM prototype took spots in the top 10, and the famous Apple cluster didn't make the list because it was out of service for hardware upgrades. When complete, the new IBM cluster is sure to take the top spot from the Earth Simulator."
  • IBM's Blue Gene (Score:5, Interesting)

    by zal ( 553 ) on Monday June 21, 2004 @08:27AM (#9483038)
    Last Thursday there was a little HPC event by IBM at my university. And apart from the usual BladeCenter for Scale Out Computing PR blurb, there was also a one-hour presentation by one of IBM's senior strategy analysts. What I found most interesting was how they basically use embedded processors for Blue Gene due to cooling and power consumption issues. He talked about thermal design, from the basic components right up to where you compute heat dissipation for the whole room so you know where to put the very heat-sensitive Myrinet/InfiniBand components.
  • Google cluster? (Score:3, Interesting)

    by millwall ( 622730 ) on Monday June 21, 2004 @08:30AM (#9483054)
    Google cluster not in here? What do you reckon the performance/size of such a cluster could be?
  • WWDC Power (Score:2, Interesting)

    by artlu ( 265391 ) <artlu@art[ ]net ['lu.' in gap]> on Monday June 21, 2004 @08:41AM (#9483104) Homepage Journal
    Maybe we can get everyone at the WWDC to use XGrid [apple.com] and break into the #1 slot for a brief second. Damn, I want Apple to take that spot.

    GroupShares Inc [groupshares.com] - A Free Online Stock Trading Community
  • by YouHaveSnail ( 202852 ) on Monday June 21, 2004 @08:47AM (#9483133)
    It's worth pointing out that if you're going to consider a given supercomputer to be "AMD" or "Intel" based on where the processors come from, then Virginia Tech's cluster of Apple Xserves is an "IBM" machine.

    That's not to take anything away from Apple. Both the Xserves and the G5 towers that came before them are great designs: reliable, they run a great OS, yada yada yada. But the chips are IBM.
  • + 65 for IBM (Score:5, Interesting)

    by freeduke ( 786783 ) on Monday June 21, 2004 @08:47AM (#9483135) Journal
    I have seen that there are 65 more IBM supercomputers in June than in November (a jump from 159 to 224). I tried to figure out which computers those were, because it is an impressive gap: +65 out of 500 in 6 months? A marketing gap?

    Last time, HP was impressive because they filled the bottom of the list with Itanium-based Superdomes, all ranked on the same benchmark figures, which means those computers were benchmarked by HP rather than by the customers. That was a good opportunity for IBM: each time they could put one of their computers on the list, they were sure to push an HP machine off it, increasing the gap with their main rival by a factor of 2 (+1 for IBM, -1 for HP).

    So I am now wondering whether this Top500 list still means anything in terms of performance and computing power, or whether it is just a promotional tool where manufacturers can wage a war over market share.

  • Re:Google cluster? (Score:4, Interesting)

    by nutshell42 ( 557890 ) on Monday June 21, 2004 @08:48AM (#9483140) Journal
    I'm just guessing here so sue me

    Google has an impressive cluster but it's optimized for storage and parallel page access.

    I don't think you could use Google's cluster to compute 42 without distributing the work by hand over the different servers, because it wasn't built to do calculations; it was built to answer page requests spread over the different units and to access the most complete mirror of today's web.

  • Re:WWDC Power (Score:3, Interesting)

    by Talez ( 468021 ) on Monday June 21, 2004 @08:54AM (#9483173)
    Do it!

    Assuming an average of 1 GHz per person, 4 FLOPS per cycle (assuming you could get AltiVec working flat strap), and 70,000 people turning up, that could work out to be... ummm... 280 teraflops.

    You'd have yourself a Universe Simulator with that amount of power!
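
    A quick sketch of that back-of-envelope arithmetic; every figure (1 GHz per attendee, 4 FLOPS per cycle via AltiVec, 70,000 attendees) is the commenter's assumption, and the result is a theoretical peak, not a sustained number:

        # Back-of-envelope estimate; all inputs are assumptions from the comment above.
        attendees = 70_000        # assumed WWDC turnout
        clock_hz = 1e9            # assumed average 1 GHz machine per person
        flops_per_cycle = 4       # assumed peak with AltiVec fully utilized

        peak_flops = attendees * clock_hz * flops_per_cycle
        print(f"{peak_flops / 1e12:.0f} teraflops")   # -> 280 teraflops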

  • Re:Google cluster? (Score:5, Interesting)

    by pete-wilko ( 628329 ) on Monday June 21, 2004 @08:56AM (#9483181)
    Having heard a lecture from Jack Dongarra about HPC and the Top 500, I recall he mentioned that Google declines to participate, as they weren't inclined to reveal their setup or to run the benchmarks for the Top 500, which would mean putting their machines to other uses for the duration of the benchmark. If I remember right, though, he said that at a guess, if they did participate (based on the various 'guesstimates' out there of Google's setup), they'd easily make the top 10 if not push for number 1. This is also leaving aside arguments over the role the system is trying to fulfill (i.e. easily distributed work, like a search engine, vs. work that can't be broken up easily, like an earth simulator).
  • by afidel ( 530433 ) on Monday June 21, 2004 @09:14AM (#9483299)
    Actually the biggest reason is that the scene data is gigabytes in size and the machines need to be maxed out on the RAM they can hold. My friend had a single texture on his senior digital film project that was larger than most systems' RAM (570 MB IIRC).
  • by flaming-opus ( 8186 ) on Monday June 21, 2004 @09:28AM (#9483439)
    They measure with Linpack, which only measures processor computational performance and ignores memory, interconnect, and I/O performance.

    This is why the US government uses the HPC Challenge benchmark, in which Linpack is only one measure among eight.
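
    For a sense of what Linpack actually exercises, here is a minimal single-node toy sketch (not the official HPL benchmark) that times a dense solve and estimates FLOP/s; note that nothing in it touches the interconnect or I/O, which is exactly the criticism above:

        # Toy Linpack-style measurement: time a dense Ax = b solve and estimate FLOP/s.
        # Illustration only -- this is not the real HPL benchmark.
        import time
        import numpy as np

        n = 2000
        A = np.random.rand(n, n)
        b = np.random.rand(n)

        start = time.perf_counter()
        x = np.linalg.solve(A, b)        # LU factorization plus triangular solves
        elapsed = time.perf_counter() - start

        flops = (2.0 / 3.0) * n**3       # standard operation count for LU on an n x n matrix
        print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on this node")
        # Memory, interconnect, and I/O performance never enter the picture.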
  • by flaming-opus ( 8186 ) on Monday June 21, 2004 @09:41AM (#9483579)
    Well, with a supercomputer OS you really only have two choices: you can create a microkernel OS that runs on all the compute nodes and makes system calls to service nodes,

    or you can cluster together a bunch of monolithic kernels. At 8000 processors you aren't going to be able to use one monolithic kernel, so the distinction between a medium-scalability OS like Linux and a large-scalability OS like Solaris/IRIX is a bit of a moot point. 1000 OS images instead of 250? It's a nuisance either way.
  • by patrik ( 55312 ) <pbutler@killer[ ].org ['tux' in gap]> on Monday June 21, 2004 @09:46AM (#9483621) Homepage
    1) The VT cluster will probably never beat the EarthSim. Why? Because the interconnects (fancy network connections) are so specialized on EarthSim that it will tromp any off-the-shelf system. Furthermore, everything about the EarthSim computers is built to be clustered as they are. VT uses InfiniBand, which is faster and lower latency than Myrinet or the other common cluster interconnects, which is part of the reason it kicks so much butt, but the systems are still pretty much off the shelf and will never be able to beat EarthSim. Of course VT does it for millions upon millions less and much more cost-effectively, so even if it's not #1, in many ways it is the best.
    2) Google's cluster is (probably) a much more distributed system; it would probably take a severe beating trying to do the LinPack benchmarks that they use to rank the Top500. The algorithm requires a lot of data passing, and Google's cluster probably doesn't excel at low-latency or even high-bandwidth (>16 Gb/s) data passing. That's just an educated guess though; AFAIK that information is pretty well secreted. In raw processing power under one roof Google probably has it made, but since most problems (not all, read: *@home) in science and math require lots of data passing between nodes, Google would probably get trounced in the Top500.

    Patrik
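
    One rough way to see why the interconnect dominates for tightly coupled problems is the usual latency-plus-bandwidth cost model. The figures below are made-up illustrative numbers, not data about EarthSim, VT, or Google:

        # Simple alpha-beta model: time to move a message = latency + size / bandwidth.
        # Both link profiles below are hypothetical assumptions for illustration.

        def transfer_time(message_bytes, latency_s, bandwidth_bytes_per_s):
            return latency_s + message_bytes / bandwidth_bytes_per_s

        msg = 8 * 1024 * 1024  # say, 8 MB of boundary data exchanged each iteration

        commodity = transfer_time(msg, latency_s=100e-6, bandwidth_bytes_per_s=125e6)  # ~1 Gb/s Ethernet-class
        fast_net  = transfer_time(msg, latency_s=5e-6,   bandwidth_bytes_per_s=2e9)    # ~16 Gb/s low-latency link

        print(f"Commodity link:     {commodity * 1e3:.1f} ms per exchange")
        print(f"Fast interconnect:  {fast_net * 1e3:.1f} ms per exchange")
        # Multiply that gap by thousands of exchanges per timestep across thousands
        # of nodes, and tightly coupled codes quickly become interconnect-bound.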
  • by freeduke ( 786783 ) on Monday June 21, 2004 @09:54AM (#9483724) Journal
    I found this on the Folding@Home site. It seems that they are running FAH in their spare time, and when you have a look at the statistics of team 446 [stanford.edu], you see that they are the top team and that they had 23,721 CPUs active during the last 50 days...

    That tells more about "the beast". So far, I can only tell that it is made of Linux clusters containing about 12,500 nodes, because with clusters you are facing dual-processor systems 98% of the time.

    Here is the track, if someone wants to hunt the beast.
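
    The node estimate above is just the CPU count divided by assumed CPUs per node; a quick sketch of that arithmetic (the 98% dual-processor figure is the commenter's own assumption):

        # Reproducing the rough node-count estimate from the comment above.
        cpus_active = 23_721        # from the team 446 stats cited in the comment
        dual_fraction = 0.98        # assumption: 98% of cluster machines are dual-processor

        nodes = cpus_active * dual_fraction / 2 + cpus_active * (1 - dual_fraction)
        print(f"~{nodes:,.0f} nodes")   # roughly 12,000, in the ballpark of the ~12,500 guess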

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Monday June 21, 2004 @10:21AM (#9484004)
    Comment removed based on user account deletion
  • by serviscope_minor ( 664417 ) on Monday June 21, 2004 @11:35AM (#9484720) Journal
    A supercomputer would be really rather unsuitable for this kind of thing. The most important thing by far is having a really fat pipe, as opposed to a really fast computer. Besides, loosely coupled clusters with load balancing would do a much better job (for the cost) than a supercomputer, since the (expensive) tight coupling that a supercomputer gives is unnecessary.

  • June 1994 (Score:4, Interesting)

    by Darth Cider ( 320236 ) on Monday June 21, 2004 @12:48PM (#9485652)
    Check out the June 1994 list [top500.org]. Ten years ago, supercomputers at about the 100th place on the list had the gigaflop performance of today's desktops. Flashmob1 [flashmobcomputing.org], the University of San Francisco event in April that assembled a 180 gigaflop cluster in a single day, would have taken the number 1 spot. It's just cool to imagine the trend continuing, and it could, especially with WiFi or WiMAX collective computing.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...