Hardware

Top 500 Fastest Computers 97

epaulson writes "The Top500 list has been released for the first half of 1999. The number one machine remains ASCI Red. The biggest Linux machine is cplant at 129, and Avalon is number 160. The list is a ranking of results from the LINPACK benchmark, which is a Linear Algebra code, so things like distributed.net and SETI@home don't count. "
  • by Anonymous Coward
    No, it's really inaccurate. I know of two Fujitsu computers at the Department of Computing, Imperial College, London, UK that are big enough to make the list.

    Thing is, people have to go and register themselves on the list & probably no-one knows it exists.
  • One of my friends told me it was for CEDAR, the document analysis/recognition research group, but I have my doubts. Perhaps they'll use it to find a remedy for Bell Hall's severe case of sick building syndrome. Ever spend a summer afternoon in there? Errrgghhhh ....

    (sorry, this won't make sense unless you go to SUNY Buffalo)
    >A Hitachi or some other computer company got busted for "Dumping". That is, selling the good for less than it costs to produce.

    Of course, this is exactly what practically every Internet software company's strategy is--give away the product to gain market share.
  • Does this mean that there are zero supercomputers in China? That sounds likely.
  • The original spec does call for the ability to run Win NT on the Teraflops machine. My manual describes the details on how to do so. The good news is that we did get Linux 2.0 running, although it never could access the MRC mesh interconnect.
  • If you're talking about the Java version (and I assume your comment "using Navigator" means you are), the reason is simple. Windows Navigator Java has a JIT compiler, whereas the Linux version is plain-old interpreted (and horribly slowed by being hooked into Navigator's event loop, besides).

    I assume the benchmark would run on Kaffe and the new IBM Java, if someone wanted to see what difference a JIT compiler makes under Linux.
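
    If you just want a rough feel for the interpreter-vs-JIT gap without the full benchmark, a toy timing loop like this (untested sketch; the class name and iteration count are just made up for illustration) run under each VM will show it:

        // FlopLoop.java -- crude floating-point timing loop, not the real LINPACK kernel
        public class FlopLoop {
            public static void main(String[] args) {
                int n = 5000000;                        // iterations; adjust for your machine
                double a = 1.0000001, x = 1.0;
                long t0 = System.currentTimeMillis();
                for (int i = 0; i < n; i++) {
                    x = x * a + 1.0e-9;                 // one multiply + one add per pass
                }
                double secs = (System.currentTimeMillis() - t0) / 1000.0;
                // two floating-point operations per iteration
                System.out.println("~" + (2.0 * n / secs / 1.0e6) + " Mflops (x=" + x + ")");
            }
        }

    Run the same class under the Windows VM and the Linux one, and the ratio should roughly tell you what the JIT is buying.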

  • Posted by FrankGraphics:

    You'll find the kits at:

    http://www.associatedpro.com/apsprice.html
  • wow, the AMD *does* suck for FP... I got 6.25 mflops in java with my p200MMX..
    (it could be the java interpreter though, I'm using Netscape's on win98)
    ---------------
    Chad Okere
  • No, it just means there are zero supercomputers in China which have had benchmarks submitted for them. There was a story on 60 Minutes last week about China buying several large SGI boxes. It's a big deal because everyone's afraid they'll use them for weapons research, but the State Department said they wouldn't stop it, because they could always just connect a whole bunch of commodity PCs together (a la Beowulf). Of course, when asked why they couldn't stop selling them various machine tools for making missiles, they started talking about how old the machines are, and how they were to be used to build planes and blah blah blah.
  • Just ran the Linpack.java on the 203rd fastest computer on earth (at least on the list) and it gave a whopping 8.3 MFLOPs. Guess java isn't the right tool for the job :)

    ask: java Linpack
    Mflops/s: 8.374 Time: 0.08 secs Norm Res: 1.43 Precision: 2.220446049250313E-16
  • by Anonymous Coward
    They are probably vector processors, and vector processors are good for LINPACK. (On Hitachi's homepage, they called it "pseudo vector processing".) Thus they have good Rmax/Rpeak ratio compared to other superscalar processor based computers.
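
    (For the curious: the reference LINPACK code spends most of its time in a loop called DAXPY, i.e. y = y + a*x over long vectors -- roughly like the sketch below, not the benchmark's actual code -- which is exactly the kind of long, regular, stride-1 loop a vector pipeline keeps full with no branch or cache surprises.)

        // DAXPY: the inner loop that dominates the LINPACK factorization
        static void daxpy(int n, double a, double[] x, double[] y) {
            for (int i = 0; i < n; i++) {
                y[i] += a * x[i];
            }
        }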
  • ASCI Red was Intel's supercomputing division's last hurrah -- the king is dead; long live the king.
    Back in November, one vendor (don't remember if it was SGI or Sun) earned itself some animosity when its marketing department counted the number of entries it had in the Top500 and declared itself the leader in the supercomputing industry.
    BTW, try using a table to get your columns to line up.
    Christopher A. Bohn
  • Ah, but supercomputers fill the niche of "performance at any cost." The notion of price-performance wasn't heard in HPC circles until the Beowulf project in 1994.
    And you're absolutely right about the scaling of applications. But the use of the LINPACK benchmark was a necessary decision to make the list accessible to most anybody who had a supercomputer, despite the benchmark being obsolete.
    Christopher A. Bohn
  • Sorry, I don't go to SUNY Buffalo. I'm in Buff State sweating because NY-MO just turned off the A/C. Maybe they are trying to find a cure for the 8% sales tax, figure out what to do with the Aud, and work out how to get a decent radio station around here?
  • I've actually run the program on both Linux and Windows using Navigator. For some reason, I get better marks under Win98.. It may be because of something I did wrong under Linux, but I don't know. Any takers?
  • Just thought I'd add a little comment... Dr. Jack Dongarra is a Distinguished Professor of Computer Science at the University of Tennessee and Oak Ridge National Labs. He also runs (or co-runs, I can't remember exactly, at the moment) Netlib, one of the co-sponsoring agencies of the Top 500... Just thought I'd say Go Vols! :)
  • Posted by FrankGraphics:

    "The market for such CPUs would be very small, maybe a few hudred per year"

    Actually there would be a heck of a lot more demand for such a CPU if the production cost could be made economical enough for, let's say, CG or games.

    "Needless to say, there would also be many technical difficulties. Feeding thousands of registers would require a very wide memory architecture, a few thousand bits might be a good start. "

    Sony's next generation "Play Station" has a 2000+ bit bus. So the technical difficulties aren't really a stumbling block.

    "SIMD can be used effectively only when there is one operation that is done to a big array of data. eg you have an array of 1024 bytes, and you want to increase the value of each byte by one. However, not all code is like this. "

    True, but you could use execution units that each control several registers; the ratio could be something like 2:1, 3:1, 4:1, etc. The optimization of wafer real estate versus execution cycles becomes the issue. Wafer size can be increased to accommodate more processing power, which BTW is more power efficient than these clusters of PCs (sarcasm) being used today.

    The technology that comes to mind to at least experiment with such architectures would be FPGAs. I think I read somewhere that FPGAs were being used by a company to produce a massively parallel processing system.

    Something to think about also is that the needed R&D for such systems could come from talented engineering students working on MS or PhDs. They could use electronic CAD systems on campus to produce the lithography masks for CPUs and have Intel or Motorola grow the chips. This would reduce the cost of R&D by billions. Giving access to Intel's or Motorola's facilities, which are manufacturing chips anyway, wouldn't cost much. Remember, these companies only grow the chip. The companies that support such a program get first right of refusal for any new CPU architecture produced by a student. Almost like open source for hardware. Just a thought.
  • The Hitachis are vector CPUs. You don't have much use for them, but they sure do run optimized linear algebra codes fast. Most other machines on the list use commodity CPUs. Cray used to have a bunch of vector CPUs in MPP configuration, but now they just use Alphas. Doing a separate CPU line for supercomputers is just too expensive. Let's hope the vector stuff is gonna make a comeback in next generation "multimedia" instructions or something like that.
  • > I read somewhere that american companies are _way_ out ahead in dealing with the millennium bug, followed by the UK & Germany, then rest of Europe, then Japan.

    I believe Scandinavia and the Benelux region are in fact ahead of Germany when it comes to dealing with the Y2k problem, on par with the USA and the UK.

    When it comes to IT usage, Sweden and Finland are in fact #1 and #2 in the world when you count per capita. In Sweden over 50% of the population used the Internet last month. Mobile telephone usage is WAY ahead. There is in fact a lot of interesting research done in northern Europe when it comes to wireless communication, for instance Bluetooth, and other high-tech areas. But American media is a bit bad at reporting non-American news....

    Do I have an inferiority complex? Hell yeah. :-)
  • First, everyone notice number 3? "Classified" location, owned by the "Government". I bet my left nut that it's sitting in a bunker at Fort Meade working on a way to violate our privacy.

    Secondly, is Blue Mountain completely up to speed yet? I seem to remember reading that it was going to be the fastest (albeit not by much) when all the processors were finally added. I dunno, maybe I was just smoking something or reading SGI press releases....

    ----

  • Yeah, I have an account on the 13th fastest computer in the world. Nothing quite as fun as sitting down and allocating 512 processors to work on the formation of a planetary system. The Unicos/MK OS on Crays is a very interesting example of microkernel design that is well done. Of course, the optimizing compilers on that baby are even more astounding.

    neutrino
  • I think that a computer's power is reflected by its use, not by megaflops.

    The results count, not only the potential.

    Then the top 500 is useless, since it doesn't describe the task.

    We should search for the most useful computer in the world. That would lead to a great debate.
  • I bet my left nut that it's sitting in a bunker at Fort Meade working on a way to violate our privacy.

    You're likely one step away from eunuch-hood. I doubt if the NSA computers you're thinking of are on this list--or even run linear algebra software, for that matter. Those "classified" machines are in all likelihood simulating nuclear reactions and other defense-related tasks.

    -Ed
  • So, what sort of numbers would a uniprocessor machine spit out under the linpack benchmark?

    (i.e. my cyrix 166)
  • Well, when you consider that 129 on the list is a 400 node 500MHz Alpha cluster, I don't think you would find numbers low enough to represent the average desktop PC :)

    --
  • We made the list!!! Hurray, something besides snow!!! I don't think the UB site is finished yet. I'll bet that those government classified listings are not there just to run a few stockpile calculations and a screensaver. I have some opinions, but I will keep them to myself.
  • Don't know if I'm reading it correctly, but the linpack benchmark gives 61 Mflop/s for a 350MHz K6-2. Seems a little high to me?
  • This list is accurate at what it is trying to measure. The list is for general-purpose supercomputers and so this eliminates a number of machines that are much faster for dedicated tasks. The NSA, for instance, has machines that are very good at the task of factoring. These machines, however, are not capable of performing general tasks, and thus are not candidates for this list. In addition, you may say that there are other general purpose supercomputers out there that are faster, but not on the list. This is doubtful, as the machines have to be made by someone with experience at the rather specialized task of making ultra-fast systems. All of the major producers of supercomputers recognize the importance of such benchmarks and thus work with people to do Linpack tests. The importance of these benchmarks is that the company that produces high results will get more sales. The only computers now left out of this list would be those produced in secret by governments or other such entities. These groups fall behind because they do not have the existing knowledge and resource base needed to produce such computers. The end result: This list is quite accurate with perhaps a few Ultra Secret machines left out.

    BTW, I work on two of the machines on this list.

    neutrino
  • Silly rabbit, everyone can agree on this one:

    The most useful computer in the world is the one you use.

    ----

  • I'm very surprised at how many industry computers are on the list and how early they dominate. From 21 on, industry starts to make a strong showing. For many years floating point supercomputers were strictly research/university material...
    "There is no spoon" - Neo, The Matrix
    "SPOOOOOOOOON!" - The Tick, The Tick
  • by rillian ( 12328 ) on Friday June 11, 1999 @05:25PM (#1853776) Homepage
    I just went and got the 1000x1000 double precision benchmark from netlib.org [netlib.org]. I grabbed the lapack library and g77 from the debian website (Debian 2.1/slink versions)

    On my 400MHz K6-2, I get 16 Mflops without optimization, 20 with -O3. Not quite what was listed in the performance document [netlib.org], but that might have been with a hand-tuned library.

    For comparison, my home machine (a 300MHz K6-2) gets 13 Mflops unoptimized, 20 with. It's running Debian 2.2pre/potato which uses egcs, so the optimization is probably better. Both machines have 100 MHz fsb and 1 MB L2 cache.

    There's a fun java version [netlib.org] of the LINPACK benchmark as well. I get 1.4 Mflops. :)
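
    In case anyone wonders where the Mflops number comes from: these programs use the standard LINPACK operation count for solving an n-by-n system and divide it by the wall-clock time. Roughly (a sketch, not the benchmark's actual code):

        // LINPACK convention: solving an n x n system counts as 2/3*n^3 + 2*n^2 flops
        static double mflops(int n, double seconds) {
            double ops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
            return ops / seconds / 1.0e6;
        }
        // e.g. mflops(1000, 33.0) is about 20 -- roughly the K6-2 figure above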
  • Then why isn't it done all the time?
    Because the fastest is not the most useful.

    The computer is not a single isolated piece of hardware, it is connected with devices and used by PEOPLE, and you can't move them around that easily.
  • Where do I get one of these fine computers, do you think Best Buy carries them?? :)


    -----
  • I'd LOVE to put Windows 98 on ASCI Red !

    how many teraflops would we get, then? :)
  • by gavinhall ( 33 )
    Posted by FrankGraphics:

    Why use thousands of antiquated CPUs? Wouldn't it be better to build a multi-register CPU similar to MMX but on a scale in the thousands? So what if the chip density isn't .18 microns, say it's .35 and the wafer is one inch square. The added benefit of proximity and architectural design along with the new machine code (software) would make for groundbreaking research. This use of clunky CPUs doesn't impress me. The Merced will use multiple registers and execution units and will do more for the development of parallel processing software research than any of the listed supercomputers!
  • According to IC's web site [ic.ac.uk], they have one Fujitsu VX/1 vector supercomputer and one Fujitsu AP3000 massively parallel server, which is effectively a nest of 48 independent UltraSPARC systems linked together Beowulf style.

    But when you look up Fujitsu on the Top-500 database, it turns out that only the vector supercomputer (VPP) series make the list, and none of their AP-xxxx systems.

    For the VPP series, the entry-level for the top 500 is a twelve processor system rather than IC's single vector processor.

    On the other hand, the AP-3000 would have enough total throughput at 45.6 Gflops to get on the list at number 172. But my guess is that it can only achieve that for problems that split into relatively big independent chunks.

    That might be OK say for servers and big CFD models, but I suspect that the LINPACK test suite needs a much more fine-grained parallelisation, and would be much harder hit by communication latencies between nodes.

    That's just a guess: perhaps any real supercomputer experts out there could say whether this sounds right ?
  • The compilers are great. Unicos/MK isn't bad, but can be killed by user processes doing too much IPC. OS IPC should have been given a better priority. That was 2 years ago, it may be fixed by now.

  • by extrasolar ( 28341 ) on Saturday June 12, 1999 @06:24AM (#1853787) Homepage Journal
    Also, how did Linux get on this list if they can't even beat NT running a measly 4 processors???

    Even though I believe they modified the source of Linux to run on all those processors, it is one of the advantages of Linux.

    I am awaiting a press release from Redmond.

    --

  • If you go to the very end of the LinPack ratings you'll see that the Atari ST beats the Macintosh as well as the IBM 286/287 combo

    I knew it :-)
  • Actually, these days the critical thing is the memory access pattern. Especially with things like ASCI Red, which are really in some sense more clusters than MPP machines.

    In truth, ten years from now we're not going to be doing MFLOPS to assess machine performance, or opcounts to assess algorithm performance. We're going to be looking at the number and pattern of memory accesses. Some of the IBM Power3 processors can churn out something like 4 or 6 double precision flops per clock. The question is getting a main memory bus that will *read* 96 bytes and *write* 48 bytes per CPU clock, in scatter gather, with no latency. Such a beast doesn't really exist, so Things Get Interesting.

    How fast you can do something these days has a lot to do with how well you can parallelize it.
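
    (One way to get those figures, assuming no operand reuse in registers: every flop wants two fresh 8-byte operands in and one 8-byte result out, so

        6 flops/clock x (2 loads x 8 bytes) = 96 bytes read per clock
        6 flops/clock x (1 store x 8 bytes) = 48 bytes written per clock

    Real code reuses operands, but it gives a feel for why the memory system, not the FPU, is the wall.)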
  • Actually, on Linux and IRIX it's pretty comparable.
  • Blue Mountain's up to speed, but of course things are always being tweaked and reconfigured. Blue Pacific, however, has issues.
  • As you would expect:

    1) By number of computers on the list, the top 6 countries are G7 members;

    2) The G7 all have a computer in the top 47, while no non-G7 state has a computer in the top 52.

  • First, everyone notice number 3? "Classified" location, owned by the "Government". I bet my left nut that it's sitting in a bunker at Fort Meade working on a way to violate our privacy.

    well, i can't specifically comment on the machine that holds third place, but i work for a sometimes defense contractor, and we have some very fast machines here that the government owns (ie the government paid for them, so they belong to the government) that are used for simulations, like another poster said. the nature of the simulations is classified, so i won't go into that (though it's hardly as nefarious as you might think). i can tell you, however, that these computers do not spend their time trying to crack encryption keys or other things to "violate our privacy".

  • That machine had LOTS of chess-specific hardware acceleration.
  • by gavinhall ( 33 )
    Posted by FrankGraphics:

    Just saw an FPGA kit that slips into a PCI slot in a PC for some US $300. I hear some FPGA chips have up to 1,000,000 gates and operate at 200 MHz! Given that it would take two gates to produce one register for one bit, I figure the possibility of a 32 bit processor with 200 or maybe even 500 active registers with a humble instruction set of 17: XOR, OR, AND, conditional jump, jump, Shift Right, Shift Left, Integer Math: Add, Sub, Multiply, Divide, Floating Point Math: Add, Sub, Multiply, Divide, Load from memory and Block Transfer. Pretty basic but should get the job done. Each register will have its own execution unit. Now the number of cycles to complete an operation should range from one to four with the mean being about 2. Let's see: 200 MHz at an average of 2 cycles per op, that's 100 million, times five hundred, that's 50 billion instructions per second! Give me a few months and my supercomputer will be on this list and will only take a thousandth of the space of those other monsters!
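
    (Checking my own math: 200 MHz / 2 cycles per op = 100 million ops/s per register's execution unit, and 100 million x 500 registers = 50 billion ops/s -- peak, of course, assuming every unit can be kept fed every cycle.)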
  • Um, number 16 or 19 (can't remember) on the list was in Chippewa Falls, WI. The only important thing I can think of in Chippewa Falls is...Leinenkugel's Brewery! Maybe those of you who aren't from the Upper Midwest aren't as familiar, but maybe they do a whole lot more volume than I thought...hehe. Being a Minnesotan, I think that Leinie's is close to the only useful thing to come out of Wisconsin. On a side note, 14 of the top 500 are in good old Minnesota, mostly because of the University and Cray. That's a pretty decent total for one state. And the U has some pretty kickin' machines it would seem.
  • Where did you see this nifty device?

    Just curious . . .

    If you don't want to clutter /., anybody with
    nifty info on FPGA stuff, mail me at
    johng@NOSPAM.eng.auburn.edu

    And, of course, s/NOSPAM//

    (Isn't it sad we have to mangle?)
  • Posted by Napalm4u:

    I think it uses some kind of Unix-based system. Besides, Win98 would make it take an hour just to start up.

    Besides, what about that chess-playing computer by IBM? Deep Blue, the one that beat the Russian by 1 game, how fast was that?
  • Deep Blue is a member of IBM's SP family. IIRC, it was more souped-up (suped-up? :-) than a standard SP-2 but not quite up to par with ASCI Blue Pacific.
    Christopher A. Bohn
  • by EngrBohn ( 5364 ) on Friday June 11, 1999 @06:41PM (#1853802)
    A machine cannot be included on the list if the owners don't submit the LINPACK results for consideration.
    Christopher A. Bohn
  • It's a shame. Sure was nice for 7 months to be able to point out that a Linux cluster was one of the 100 fastest supercomputers (C-Plant was ranked #92 on November's list). We'll just have to wait until the port to ASCI Red is finished (I seem to recall a university team (UVA?) was working on that; of course, I could be completely mistaken).
    Christopher A. Bohn
  • Not sure where to get everything, but all the code is freely available. Oh wait, try here:

    http://www.netlib.org/linpack/

    I once did some optimization of the BLAS on some old SPARCs. We were able to more than double the performance by unrolling the loops into blocks that would fit the cache (hand-tweaking the assembler). Which makes me wonder: most optimizing compilers do a fair amount of this sort of thing themselves... could this list have as much to do with compiler tricks as it does with the raw speed of the machine?
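
    For anyone who hasn't seen the trick, here's the shape of it -- a rough Java sketch of cache blocking for a matrix multiply (the real thing was hand-tweaked assembler, and the block size here is just a placeholder):

        // Work on blk-by-blk tiles so the sub-matrices stay resident in cache
        static void blockedMultiply(double[][] a, double[][] b, double[][] c, int n, int blk) {
            for (int ii = 0; ii < n; ii += blk)
                for (int kk = 0; kk < n; kk += blk)
                    for (int jj = 0; jj < n; jj += blk)
                        for (int i = ii; i < Math.min(ii + blk, n); i++)
                            for (int k = kk; k < Math.min(kk + blk, n); k++) {
                                double aik = a[i][k];            // hoist the reused operand
                                for (int j = jj; j < Math.min(jj + blk, n); j++)
                                    c[i][j] += aik * b[k][j];    // c = c + a*b, one tile at a time
                            }
        }

    Pick blk so a few blk-by-blk tiles fit in cache and the loads stop going out to main memory on every pass; that, plus unrolling the innermost loop, is most of the win.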

  • I think it is pretty cool that you can get an account on the 129th fastest computer (CPLANT), if your work is valid. I just think that is too frigg'n cool.
  • "The only computers now left out of this list would be those produced in secret by governments or other such entities." - not true. I've worked on a number of systems that would spec on the list and aren't there.. Why? A lot of major corporations don't like to tell competitors what they are using.
  • Okay, I was bored, so I went through and counted some stuff:

    The numbers won't add up correctly because several of the machines were credited to two co-builders. Or I could have made a mistake.


    Company: total, # out of the top 10, highest rank

    (I tried to make this line up but apparently /. won't let me).

    SGI: 182/500, 7/10, #2
    IBM: 118/500, 1/10, #8
    Sun: 95/500, 0/10, #54
    H/P: 39/500, 0/10, #150
    Fujitsu: 23/500, 0/10, #26
    NEC: 18/500, 0/10, #29
    Hitachi: 12/500, 1/10, #4
    Compaq: 5/500, 0/10, #49
    Intel: 4/500, 1/10, #1
    Self-made: 3/500, 0/10, #129
    SNI: 2/500, 0/10, #66
    Tsukuba: 1/500, 0/10, #18
    Siemens: 1/500, 0/10, #355


    This ranking above looks very different from the ranking of the top five computers. For example, Intel, who is #1, is basically a non-factor in the supercomputer market, with a mere three other computers on the list. H/P and Sun, which don't even make the top 50, seem to have the mid-level supercomputer market locked up, with 134 computers between them. SGI, however, is still the undisputed leader, from the high end (7/10) to the mid and low ends of the list.
  • A Hitachi or some other computer company got busted for "Dumping". That is, selling the good for less than it costs to produce. This drives others out of business and leaves the market for the Dumper. Of course, they have to be able to still stay in business while "dumping". When you're Hitachi, you still have other products that are very profitable.
  • Wouldn't it be better to build a multi register CPU similar to MMX but on a scale in the thousands?

    Interesting idea, but it does have its flaws. For one, designing a new CPU is _really_ expensive. And as you add more parallelism, it gets even more complicated and expensive (look at Merced). The market for such CPUs would be very small, maybe a few hundred per year. As you may have noticed, even supercomputers are made as cheap as possible these days (e.g. Beowulf).

    Needless to say, there would also be many technical difficulties. Feeding thousands of registers would require a very wide memory architecture; a few thousand bits might be a good start. I sure wouldn't want to be the engineer responsible for designing a mobo for those CPUs..

    There are a few architectural problems, too. SIMD can be used effectively only when there is one operation that is done to a big array of data, e.g. you have an array of 1024 bytes, and you want to increase the value of each byte by one. However, not all code is like this. You might want to increase the value of the first element by one, the second element by two and so on. MMX just became useless; there is no parallelism here. Now we have a CPU that is working at a fraction of its full potential: of the 2000 or so registers, only two are used. There is other stuff too, but I'm lazy, so..
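
    To put the same example in (sketch) code: the first loop below is the SIMD-friendly case, the second is the kind where MMX-style packed instructions don't buy you much, because the operand changes with the index:

        // SIMD-friendly: the identical operation applied to every element
        static void addOneToAll(int[] data) {
            for (int i = 0; i < data.length; i++)
                data[i] += 1;              // one packed "add 1" covers several elements at a time
        }

        // Much less so: each element needs a different operand
        static void addIndexToEach(int[] data) {
            for (int i = 0; i < data.length; i++)
                data[i] += i;              // the "+1, +2, +3, ..." case described above
        }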

  • >Guess java isn't the right tool for the job :)

    It's not the language that's the problem, it's the fact that you're running the program via an interpreter (aka your Java VM). Compile the Java to native code and you should get better results.
  • ...I doubt if the NSA computers you're thinking of are on this list--or even run linear algebra software, for that matter.

    Or exist on any lists, anywhere, ever. Their big iron's existence is classified, let alone specs or performance benchmarks. And you can bet your ass that NSA's top five machines would take spots 1 through 5 on this list, if they existed. ;-)

    Well, ok, maybe not on this list, due to the software differences, as edhall noted. But you know what I mean.
    ----------------------

  • Anybody know about highly Pentium II-optimized LAPACK/BLAS libraries? Anybody know if compiling the BLAS under pgcc/pg77 breaks the test cases? Are there binaries in rpm format?
  • IIRC it's a 32 node SP2 with special hardware for chess lookahead. Not so useful for floating point, but a great publicity stunt for IBM.
  • by Trepidity ( 597 ) <delirium-slashdotNO@SPAMhackish.org> on Friday June 11, 1999 @07:45PM (#1853822)
    Okay, I was really bored, so I did more stats. This time by country.


    USA: 292/500, 7/10, #1
    Japan: 56/500, 1/10, #4
    Germany: 47/500, 0/10, #15
    UK: 29/500, 2/10, #7
    France: 18/500, 0/10, #47
    Canada: 8/500, 0/10, #29
    Sweden: 7/500, 0/10, #71
    Netherlands: 6/500, 0/10, #146
    Switzerland: 6/500, 0/10, #339
    Italy: 5/500, 0/10, #36
    Australia: 5/500, 0/10, #102
    Korea: 3/500, 0/10, #78
    Denmark: 3/500, 0/10, #275
    Belgium: 3/500, 0/10, #286
    Spain: 3/500, 0/10, #314
    Finland: 2/500, 0/10, #53
    Norway: 2/500, 0/10, #193
    Austria: 2/500, 0/10, #392
    New Zealand: 1/500, 0/10, #64
    Luxembourg: 1/500, 0/10, #247
    Mexico: 1/500, 0/10, #436


    Summary: United States 292 vs. Everybody Else 208.

    In the top ten, it's United States 7 vs. Everybody Else 3.

    If you compile the stats by the country in which the corporation that made the computer is based, American companies are responsible for over 400 of the top 500 supercomputers (just about everything except the Japanese stuff).

  • Anybody know what kind of processors the Hitachi supercomputers at #4 (128 processors) and #12 (64 processors) use? They seem to be the fastest per-processor for LINPACK...

  • Hmm... bored indeed! ;-) Interesting stats, though.

    The most surprising part of the list IMO is the Hitachi showing. They made a supercomputer with only 1/10th the number of processors used in the computers near its 4th place and 12th place positions. And their supercomputers are the only ones in the top 500 with a single-digit number of processors (4). 11 of the 12 are in Japan, though! Shouldn't a lower number of processors reduce the price tag in a major way? Why aren't they in the US?
  • The MFLOPS/$ for a processor goes down for the top end. The problem is that you can't just stick 1000 CPUs in a computer and then say it's 1000 times faster than a single CPU. For some problems, it might not be faster at all.
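
    The usual back-of-the-envelope way to see this is Amdahl's law: if a fraction s of the work is inherently serial, N processors can only speed you up by 1 / (s + (1-s)/N). A quick sketch (standard arithmetic, nothing specific to these machines):

        // Amdahl's law: best-case speedup on n processors when a fraction s of the work is serial
        static double speedup(double s, int n) {
            return 1.0 / (s + (1.0 - s) / n);
        }
        // speedup(0.01, 1000) is about 91 -- nowhere near 1000x, even with only 1% serial work

    And that's before counting the communication between nodes, which is what really hurts on big problems.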
