IBM Sets Supercomputer Speed Record

T.Hobbes writes "IBM's BlueGene/L has set a new speed record at 36.01 TFlops, beating the Earth Simulator's 35.86 TFlops, according to internal IBM testing. 'This is notable because of the fixation everyone has had on the Earth Simulator,' said Dave Turek, I.B.M.'s vice president for the high-performance computing division. The AP story is here; the NY Times' story is here."
  • Damn, (Score:3, Funny)

    by scorp1us ( 235526 ) on Wednesday September 29, 2004 @07:51AM (#10382375) Journal
    I wish I knew what a Tecord was...

Maybe /. should be using the automatic text-box spell checking found in KDE...
    • Re:Damn, (Score:4, Funny)

      by sgant ( 178166 ) on Wednesday September 29, 2004 @08:10AM (#10382527) Homepage Journal
      Imagine a Teowulf Tluster of these!
    • Re:Damn, (Score:3, Funny)

      by cybergrue ( 696844 )
      It gets better.
      Due to the slashdot bug in Firefox, the second line of the summary reads

      eating the Earth Simulator

      The b was hidden under the dark green of the sections field on the upper left.
Now that's an impressive feat of computing power.

    • Re:Damn, (Score:4, Funny)

      by jayayeem ( 247877 ) on Wednesday September 29, 2004 @08:16AM (#10382577)
      Problem is, running a spell checker would have used .16 teraflops of the machine's capacity and cost it the record.

In Win 2000 this might well be true. When I ran a test of the speed of my latest program on a 2 GHz machine, the results were in a ratio of 2.2 to 2.7, with Task Manager running in the former case. Task Manager consumed about 20% CPU when the system was busy.

I realize a spell checker ought to stop as soon as it finishes scanning, while Task Manager never stops.

A supercomputer-class spell checker would undoubtedly be fully blown, in the industrial strength class. Such software wouldn't just do the ga
    • by Anonymous Coward
      Wouldn't help. If they had used a KDE spell checker, it would have been changed to 'rekord'.
    • Maybe slashdot should be using editors who know how to spell and who are willing to proofread story submissions before clicking Submit. This has the benefit of being a cross-platform solution :P
  • Tecord? (Score:5, Funny)

    by holzp ( 87423 ) on Wednesday September 29, 2004 @07:51AM (#10382382)
    Is that how they measure Records in Teraflops?
    • Aliasing (Score:2, Informative)

      by PingPongBoy ( 303994 )
      When you have a machine this fast the sampler cannot keep up. As a result what you see is a total distortion of reality. You may have seen this phenomenon with wagon wheel spokes rotating backward when the cart is really rolling. Thus when you read "speed record at 36.01 TFLOPS" your eyes view the letters going backwards and forwards.
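    The wagon-wheel effect invoked above is ordinary aliasing: sample a periodic signal at less than twice its frequency and the apparent rate folds back into [-fs/2, fs/2), sometimes coming out negative (backwards). A minimal sketch in C; the sampling and rotation rates are made-up numbers for illustration, not anything from the article:

        /* Aliasing: why an undersampled wagon wheel appears to spin backwards. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double fs     = 24.0;   /* sampling rate, e.g. film frames per second */
            double f_true = 23.0;   /* true rotation rate of the wheel, rev/s     */

            /* The samples are indistinguishable from those of a signal at the
             * alias frequency, folded into the band [-fs/2, fs/2).             */
            double f_alias = f_true - fs * round(f_true / fs);

            printf("true %.1f rev/s sampled at %.1f Hz looks like %.1f rev/s\n",
                   f_true, fs, f_alias);
            return 0;
        }

    Compiled with -lm, it reports -1.0 rev/s: a slow backward spin, even though the wheel really turns forward at 23 rev/s.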
  • by JPelorat ( 5320 ) on Wednesday September 29, 2004 @07:52AM (#10382400)
    Someone call Huinness' Nook pf Eorld Tecords!
  • by kai.chan ( 795863 ) on Wednesday September 29, 2004 @07:53AM (#10382404)
    I'm sorry, but those Supercomputers have nothing on my machine running Windows. It has a record of AlwaysFlops.
  • It might be fast, but could it keep up with monitoring all the errors and dupes on /.? ;P
  • Tecord? (Score:5, Interesting)

    by el_benito ( 586634 ) on Wednesday September 29, 2004 @07:54AM (#10382412) Homepage
    A new tecord?!? That's timpossible! But more seriously, does anyone know if there's an impartial 3rd party that ever confirms these measurements? I'm all for improving technology, but how do they verify their "tecords"?
    • Re:Tecord? (Score:5, Informative)

      by hackstraw ( 262471 ) * on Wednesday September 29, 2004 @09:03AM (#10382971)
      I'm all for improving technology, but how do they verify their "tecords"?

      The top500 [top500.org] tecords are submited on an honor system. Most of the systems are thrown together with known processors and interconnects where the tesults should "make sense". Also, the systems teport their theoretical max performance and a measured tesult. It would be pretty hard to fudge a score for the top500 by much without many people questioning it. From this page [top500.org] the top500 people say:
      While we make every attempt to verify the results obtained from users and vendors, errors are bound to exist and should be brought to our attention.
      Its kinda like any tesearch field. Most people are honest, but anomolies can and do happen, and they are usually found out by others in the field. Two of the most tecent scientist scandles involved the guy from Bell labs, Hendrik Schön, who was found falsifying data, and he was fired, and I believe that he also lost his PhD. The other is from the US government funded tesearch on MDMA by George Ricaurte. Although I believe that nothing really happened in the Ricaurte case.
  • by The-Bus ( 138060 ) on Wednesday September 29, 2004 @07:55AM (#10382419)
    Hete's the full text in case of a massive slashdotting of theit setvets:

    IBM says Blue Gene bteaks speed tecotd
    9/29/2004, 7:27 a.m. ET
    By ELLEN SIMON
    The Associated Ptess

    NEW YOtK (AP) - IBM Cotp. claimed unofficial btagging tights Tuesday as ownet of the wotld's fastest supetcomputet.

    Fot thtee yeats tunning, the fastest supetcomputet has been NEC's Eatth Simulatot in Japan.

    "The fact that non-U.S. vendot like NEC had the fastest computet was seen as a big challenge fot U.S. computet industty," said Hotst Simon, ditectot of the supetcomputing centet at Lawtence Betkeley National Lab in Califotnia.

    "That an Ametican vendot and an Ametican application has won back the No. 1 spot -- that's the main significance of this."

    Eatth Simulatot can sustain speeds of 35.86 tetaflops.

    IBM said its still-unfinished BlueGene/L System, named fot its ability to model the folding of human ptoteins, can sustain speeds of 36 tetaflops. A tetaflop is 1 ttillion calculations pet second.

    Lawtence Livetmote National Labotatoty plans to install the Blue Gene/L system next yeat with 130,000 ptocessots and 64 tacks, half a tennis coutt in size. The labs will use it fot modeling the behaviot and aging of high explosives, asttophysics, cosmology and basic science, lab spokesman Bob Hitschfeld said.

    The ptototype fot which IBM claimed the speed tecotd is located in tochestet, Minn., has 16,250 ptocessots and takes up eight tacks of space.

    While IBM's speed sets a new benchmatk, the official list of the wotld's fastest supetcomputets will not be teleased until Novembet. A handful of scientists who audit the computets' tepotted speeds publish them on Top500.otg.

    Supetcomputing is significant because of its implications fot national secutity as well as such fields as global climate modeling, asttophysics and genetic teseatch.

    Supetcomputing technology IBM inttoduced a decade ago has evolved into a $3 billion to $4 billion business fot the company, said Simon.

    Unlike the mote specialized atchitectute of the Japanese supetcomputet, IBM's BlueGene/L uses a detivative of commetcially available off-the-shelf ptocessots. It also uses an unusually latge numbet of them.

    The tesulting computet is smallet and coolet than othet supetcomputets, teducing its tunning costs, said Hitschfeld. He did not have a dollat figute fot how much lowet Blue Gene's costs will be than othet supetcomputets.

    Howevet, othet supetcomputets can do things Blue Gene cannot, such as ptoduce 3-D simulations of nucleat explosions, Hitschfeld said.
Being interested in protein folding, I saw an article on Blue Gene being invented to be used for simulating the folding of a protein with the full use of quantum mechanics calculations. The computer was to be so fast it would take just one year of execution time to simulate a full fold.

A few years ago the fastest supercomputers were being built to simulate atomic explosions, including the first computer to break the teraflops barrier [sciencenews.org].

The Earth Simulator was built for peaceful purposes. Blue Gene is in name motiv
  • Place your bets (Score:5, Insightful)

    by glpierce ( 731733 ) on Wednesday September 29, 2004 @07:55AM (#10382420)
    Place your bets, people!

    What percentage of posts in the first 15 minutes will be about the spelling of the last word in the title, and what percentage about the content?
    • When I first checked, I initially thought it was all but one, that one being a repost of the story in case of a slashdotting... then I realized that the story had been filtered as per s/r/t/i and couldn't stop laughing...
      -N
  • by fib2004 ( 753099 ) on Wednesday September 29, 2004 @07:56AM (#10382423)

    "I want to play chess against that one" - Kasparov

  • by halivar ( 535827 ) <bfelger@gmai l . com> on Wednesday September 29, 2004 @07:56AM (#10382426)
    ...what operating system it uses. Anybody know?
    • by joib ( 70841 ) on Wednesday September 29, 2004 @08:02AM (#10382480)
      ...what operating system it uses.


It's a sort of two-layer system. The compute nodes (2 CPUs per compute node) run an IBM-proprietary, very small and simple kernel. Every 64 compute nodes are managed by an i/o node running Linux.
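    A back-of-envelope count, combining the two-layer description above with the AP story's 16,250-processor figure (a rough sketch; the exact partitioning is IBM's, and the real machine is organized in power-of-two rack sizes):

        /* Rough node counts implied by: 2 CPUs per compute node,
         * 64 compute nodes per Linux i/o node (figures from this thread). */
        #include <stdio.h>

        int main(void)
        {
            int processors            = 16250;  /* prototype, per the AP story */
            int cpus_per_compute_node = 2;
            int compute_per_io_node   = 64;

            int compute_nodes = processors / cpus_per_compute_node;
            int io_nodes = (compute_nodes + compute_per_io_node - 1)
                           / compute_per_io_node;   /* round up */

            printf("~%d compute nodes, ~%d Linux i/o nodes\n",
                   compute_nodes, io_nodes);        /* ~8125 and ~127 */
            return 0;
        }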
Actually, I do think it's Linux. I live in Rochester and know some of the IBMers.

      I wonder if they let normal people see this thing? I'll ask.
      • Normal people have seen the currently listed BlueGene/L, which is number 4 on the Top 500 list now. The currently listed machine is smaller (4096 compute nodes in four racks) and slightly slower than the final production version is supposed to be.

        It's also been on local TV news, and in some of the newspapers.

        You can understand that if there is something bigger floating around, we're not exactly allowing photographers in. :-)
    • by BlurredOne ( 813043 ) on Wednesday September 29, 2004 @08:07AM (#10382509)
A quick Google search has netted the following: OS - Linux, HPK (High Performance Kernel); Compilers - Fortran95, C99, C++; Math Library - a subset of ESSL. If you would like to read the article, it can be found at http://www.llnl.gov/asci/platforms/bluegene/talks/gupta.pdf [llnl.gov]
    • by bhima ( 46039 ) <Bhima,Pandava&gmail,com> on Wednesday September 29, 2004 @08:12AM (#10382545) Journal
I've been fascinated with this thing ever since I discovered it was using processors I could actually write assembly (and C) for. Each node is running an embedded linux kernel.

Here's a bit more:
- each node has 2 CPUs and 4 FPUs
- custom non-preemptive kernel
- application program has full control of all timing issues
- kernel and application share same address space
- kernel is memory protected
- kernel provides: program load / start / debug / termination, and file access, all via message passing to IO nodes

      I could go on and on but it's all on Blue Gene's site http://www.research.ibm.com/bluegene/index.html [ibm.com]

      I can't resist adding that GCC won't use the second FPU on each die...

      • by joib ( 70841 ) on Wednesday September 29, 2004 @08:33AM (#10382707)

        Each node is running an embedded linux kernel.


        No.


        each node has 2 cpus and 4 fpus, custom non-preemptive kernel


        I see a contradiction with your previous statement here... :) Luckily, you got it right this time.

        As I said in my comment above, the compute nodes run an IBM proprietary kernel (apparently the kernel you're describing), and every 64 compute nodes are managed by an i/o node running Linux.


        I can't resist adding that GCC won't use the second FPU on each die...


        So what's the problem? It's not like anybody who could afford a highly specialized and expensive machine like this one couldn't afford to shell out some $$$ for xlf.

        Anyways, I'm sure that if this modified PPC core gets popular outside multi-million dollar supercomputers, the gcc team will figure out how to utilize the second FPU.
I don't see anything wrong with embedded & non-preemptive; it's not like the entire embedded world runs hard real-time kernels. I don't on the PPC I use.

Actually, given that I lifted every single fact from IBM's Blue Gene website (which I linked to), I feel comfortable saying there is no contradiction and that IBM is running a custom non-preemptive kernel, just like they said they did.


I don't see anything wrong with embedded & non-preemptive; it's not like the entire embedded world runs hard real-time kernels. I don't on the PPC I use.


            I agree, but I didn't say anything about that topic. ;-)


Actually, given that I lifted every single fact from IBM's Blue Gene website (which I linked to)


            What a coincidence, I also have read a lot of the material on that site.


            I feel comfortable saying there is no contradiction


            The contradiction I pointed out was that first you said that each no
OK, now that I'm not at work, let's try this again, as I misspoke in my first post and then totally misunderstood your reply.

The example Blue Gene/L implementation begins with dual PPC440 chips (each die with dual FPUs). A Compute Card contains 2 of these chips, some number of Link Nodes, and 512 megs of DDR RAM. There are 16 Cards to a Compute Node. Each Compute Node contains an IO Node. There are 32 Compute Nodes to a cabinet.

              Each Compute Card runs "CNK", an IBM in house kernel written in C++ and is connec

      • I can't resist adding that GCC won't use the second FPU on each die...

GCC has never been considered an excellent optimising compiler. It's good enough for basic system tools, the kernel, etc. But when performance matters, you use a compiler that comes from your CPU vendor. Typically an architecture-specific compiler can yield a 100% speedup and beyond over GCC. It's too bad AMD does not ship a compiler.
  • 36 TFlops ? (Score:5, Interesting)

    by MadX ( 99132 ) on Wednesday September 29, 2004 @07:56AM (#10382427)
    I wonder if that is sustained ??
    I know that when the Mac G5 Cluster was developed they claimed tremendous speed, but when the sustained rate was calculated, it turned out to be much lower ...

    • It's the result of the linpack benchmark, i.e. the number Rmax in the top500 list, as opposed to Rpeak which is a theoretical estimate of peak performance.

      I have no idea of what you mean by sustained.
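    For anyone wondering how the two numbers relate: Rpeak is pure arithmetic (CPUs x clock x flops per cycle), while Rmax is what Linpack actually measured. A hedged illustration in C; the 700 MHz clock and 4 flops/cycle (dual FPU, fused multiply-add) are assumptions about the PPC440 cores, not published BlueGene/L specs:

        /* Rpeak vs. Rmax for the 16,250-CPU prototype, under assumed
         * per-CPU figures (700 MHz, 4 flops/cycle).                  */
        #include <stdio.h>

        int main(void)
        {
            double cpus            = 16250.0;
            double clock_ghz       = 0.7;   /* assumed */
            double flops_per_cycle = 4.0;   /* assumed */

            double rpeak_tf = cpus * clock_ghz * flops_per_cycle / 1000.0;
            double rmax_tf  = 36.01;        /* the measured Linpack result */

            printf("Rpeak ~ %.1f TFlops, Rmax = %.2f TFlops, ~%.0f%% efficiency\n",
                   rpeak_tf, rmax_tf, 100.0 * rmax_tf / rpeak_tf);
            return 0;
        }

    Under those assumptions Rpeak lands near 45 TFlops, so the measured 36.01 TFlops would be an efficiency of roughly 80% on Linpack.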
      • Re:36 TFlops ? (Score:2, Informative)

        by kfg ( 145172 )
        I have no idea of what you mean by sustained.

He is referring to the fact that horsepower has a time component. It's only in rare conditions that you're interested in the instantaneous force a horse can apply. What you want to know is how much work you can get out of it per day.

        A cheetah may be able to sprint to 100 kph, but I'll out distance it in 10 minutes driving my car at only 80 kph.

        Human hunters on foot can only run about 15 kph, but can chase down large prey that can run 65 kph, because the human
    • Re:36 TFlops ? (Score:5, Informative)

      by Henriok ( 6762 ) on Wednesday September 29, 2004 @08:35AM (#10382730)
The peak of VT's System X cluster was about 17 TFlops, and the sustained rate was just over 10 (which earned it third place on the Top500 list). This peak/sustained ratio is significantly less than the Earth Simulator's 90% efficiency, but compared to the cost it's extremely cost effective. ES cost 100 times more but has just 3 times the sustained rate.
      • by Troy Baer ( 1395 ) on Wednesday September 29, 2004 @09:49AM (#10383340) Homepage
        The peak of VTs System X cluster was about 17 Tflops, and the sustained rate was just over 10 (which rendered it the third place on the Top500 list).

        Except that it's not on the most recent Top 500 list [top500.org] anywhere.

        Remember how Va. Tech replaced all 1100 G5 nodes with G5 XServes a few months ago? Well, when you do something like that, you have to rerun and resubmit the benchmark. Va. Tech were not able to get the machine back together soon enough to rerun the benchmark in time to make the last list; there's even a big caveat about it on the Top 500 home page [top500.org].

        (It's also not clear that the original version of the Va. Tech machine ever did anything other than run that benchmark, but that's another matter.)

        --Troy
    • I know that when the Mac G5 Cluster was developed they claimed tremendous speed, but when the sustained rate was calculated, it turned out to be much lower ...

Speaking of the Big Mac (lame name), where is it now? I don't see it in the list [top500.org] and the news page on their site doesn't list it as coming back in the 24th edition in November.
They had to take down their system while they upgraded to dual 2.3 GHz Xserves with ECC memory. The system is up and running now, and last month they were running simulations for the military.
    • I know that when the Mac G5 Cluster was developed they claimed tremendous speed, but when the sustained rate was calculated, it turned out to be much lower ...

From what I know, when Virginia Tech's G5 cluster's results were submitted they looked OK. You can see the results here [top500.org]. The measured result was about 58% of the theoretical peak, which is on par with other similarly configured systems. Now, why Tech spent $5 mil and rushed to get this system put together for the November 2003 top500 list, and then
    • The G5 Cluster was about power for money, not raw power. It's relatively cheap to make a supercomputer from G5 Macs connected with Xgrid.
  • by gutterandthestars ( 782754 ) on Wednesday September 29, 2004 @07:59AM (#10382451)
    98% of posts will be 0.4 standard deviations away from one of the following:

    0. "fist pr0st!!!!!111~"
    1. "Imagine a beowulf cluster of these!"
    2. "But does it run Linux?"
    3. "In Soviet Russia, SPEED RECORD SETS YUO!"
    4. "1. Earth Simulator: 38.56 TFlops. 2. BlueGene/L36.01 TFlops. 3. ... 4. PROFIT!!!
    5. "I for one, welcome our supercomputer overlords."
    6. "Do either of the supercomputers run BSD? BSD is dying."
    7. "I didn't have enough time to read the article, but..."
  • Huh? (Score:3, Interesting)

    by attam ( 806532 ) on Wednesday September 29, 2004 @08:02AM (#10382476)
    From TFA:
    the Blue Gene/L system next year with 130,000 processors and 64 racks, half a tennis court in size.

    The prototype for which IBM claimed the speed record is located in Rochester, Minn., has 16,250 processors and takes up eight racks of space.

    So does this mean the finished product, with almost 10x as many procs will be much faster still? Or am I reading this wrong?
    • Re:Huh? (Score:5, Informative)

      by CriX ( 628429 ) on Wednesday September 29, 2004 @08:16AM (#10382574)
      YUP.

      "About IBM's Blue Gene Supercomputing Project Blue Gene is an IBM supercomputing project dedicated to building a new family of supercomputers optimized for bandwidth, scalability and the ability to handle large amounts of data while consuming a fraction of the power and floor space required by today's fastest systems. The full Blue Gene/L machine is being built for the Lawrence Livermore National Laboratory in California, and will have a peak speed of 360 teraflops. When completed in 2005, IBM expects Blue Gene/L to lead the Top500 supercomputer list. A second Blue Gene/L machine is planned for ASTRON, a leading astronomy organization in the Netherlands. IBM and its partners are currently exploring a growing list of applications including hydrodynamics, quantum chemistry, molecular dynamics, climate modeling and financial modeling."

      -from the IBM website

    • Re:Huh? (Score:3, Funny)

It just means next year you'll see this kind of announcement on eBay:

      *** NIB * BlueGene prototype ***

    • by joib ( 70841 )

      So does this mean the finished product, with almost 10x as many procs will be much faster still?


Yes. Assuming the machine scales linearly (might be possible with Linpack), the real thing will have a Linpack score 8x that of the Earth Simulator.

      Impressive yes, but keep in mind that the earth simulator was 5x faster than the next fastest machine when it was introduced in 2002.
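    The linear-scaling arithmetic, spelled out (an idealization; real machines rarely scale perfectly even on Linpack):

        #include <stdio.h>

        int main(void)
        {
            double prototype_tf = 36.01;
            double scale        = 130000.0 / 16250.0;  /* full machine / prototype */
            double es_tf        = 35.86;               /* Earth Simulator Rmax     */

            printf("projected ~%.0f TFlops, %.1fx the Earth Simulator\n",
                   prototype_tf * scale, prototype_tf * scale / es_tf);
            return 0;
        }

    That prints a projection of about 288 TFlops, i.e. the 8x figure above.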
  • by erick99 ( 743982 ) <homerun@gmail.com> on Wednesday September 29, 2004 @08:05AM (#10382497)
    From the article:

    Unlike the more specialized architecture of the Japanese supercomputer, IBM's BlueGene/L uses a derivative of commercially available off-the-shelf processors. It also uses an unusually large number of them. The resulting computer is smaller and cooler than other supercomputers, reducing its running costs, said Hirschfeld. He did not have a dollar figure for how much lower Blue Gene's costs will be than other supercomputers.

    This is the most interesting part of the article to me. Makers of supercomputers are going to go back and forth for the speed record. However, holding the speed record with off the shelf components seems like a separate achievement in and of itself. The article did mention, however, that the IBM system is not as capable as other supercomputers.

    • Off the shelf is used loosely here. The BlueGene processors are indeed custom, but they happen to be based on the PowerPC 440 processors. That is, if you go buy a machine with a PowerPC 440 cpu, it's not exactly the same as what's in BlueGene. It is mostly the same though. What is pretty interesting is that each of the cpus is pretty paltry (the DD1 chips run at 500MHz, and the DD2 chips run at 700MHz), but the overall architecture seems to scale pretty well.
      TZ
  • by onetrueking ( 413507 ) on Wednesday September 29, 2004 @08:11AM (#10382536)
    From the NYTime article:

    "The new system is notable because it packs its computing power much more densely than other large-scale computing systems. BlueGene/L is one-hundredth the physical size of the Earth Simulator and consumes one twenty-eighth the power per computation, the company said."

    1/100th the size and 1/28th the power. Now if that isn't a beautiful thing, I don't know what is.
  • Here comes the best chess player you've ever seen!
  • More detail (Score:3, Informative)

    by erick99 ( 743982 ) <homerun@gmail.com> on Wednesday September 29, 2004 @08:19AM (#10382601)
    For a great deal of detail about this system surf over to this pdf [sc-2002.org]
If I had mod points I would let you have one, but I don't, so instead I must write this note. Thank you for specifying that your link was to a PDF. It's terrible having to wait ages for Acrobat to load because you clicked a PDF in your web browser instead of just loading it in Acrobat.
    • Really great article. Of note:

      The desired MTBF of the system is at least 10 days.

      With 65k processors it makes sense that the MTBF would be small, but wow. Of course, IBM has accounted for this in the design: the system is able to automatically recover from a failed node etc.
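    The arithmetic behind that target is brutal: with n independent parts, system MTBF is roughly the per-part MTBF divided by n. A quick check of what a 10-day system MTBF demands of each node (the per-node figure is derived here, not taken from the PDF):

        #include <stdio.h>

        int main(void)
        {
            double nodes       = 65536.0;  /* ~65k, as the parent says       */
            double target_days = 10.0;     /* desired system MTBF (the PDF)  */

            /* system_mtbf ~ node_mtbf / n  =>  node_mtbf ~ system_mtbf * n  */
            double node_mtbf_years = target_days * nodes / 365.0;
            printf("required per-node MTBF: ~%.0f years\n", node_mtbf_years);
            return 0;
        }

    Each node has to average one failure per ~1,800 years, which is exactly why automatic recovery from failed nodes is designed in.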
  • What for? (Score:3, Funny)

    by RCulpepper ( 99864 ) on Wednesday September 29, 2004 @08:26AM (#10382664)
    From the Washington Post article:

    "IBM's new system nudges past a nearly three-year-old computer speed record of 35.86 "teraflops," or trillions of calculations per second, with a working speed of 36.01 teraflops....The current record-holder, known as the Earth Simulator, is a supercomputer in Yokohama, Japan, designed to simulate earthquakes."

    Won't it be great when IBM announces that they built Blue Gene to simulate Japanese earthquakes? Neener neener.
  • Smart machines (Score:5, Interesting)

    by fionbio ( 799217 ) on Wednesday September 29, 2004 @08:29AM (#10382679)
I've heard that the neural network of the human brain has a calculation speed of 4.4 TFLOPS. How soon will these machines start to THINK? Seems like what we need now is just more storage capacity and some well-written "thinking" software...
    • Re:Smart machines (Score:5, Interesting)

      by Doesn't_Comment_Code ( 692510 ) on Wednesday September 29, 2004 @08:59AM (#10382931)
You're getting into some pretty deep issues now. Can a computer ever think? How would we know if it was thinking? At what point does the computer start thinking instead of just following instructions? No matter how complex its instructions are or how fast it executes them, isn't it still just following instructions? What about us? Are we just following instructions?

Timeout-- my head hurts.

Which brings me to my next point. If a computer ever could think, it would eventually start to think about how it thinks... And then it would overheat or explode.
    • Actually, a quick Googling [google.com] reveals that the popular estimate for the calculation power of the human brain seems to be around 100 teraflops, but at least we're in the ballpark now. Maybe they could use it to simulate the brain of a lesser animal, like a raccoon or a Republican.
  • by flaming-opus ( 8186 ) on Wednesday September 29, 2004 @08:35AM (#10382725)
    I'll be very interested in seeing how well this thing performs on benchmarks other than linpack.

Blue Gene is a very interesting design in so much as it uses IBM's 32-bit PowerPC cores, normally used for embedded applications. They put 2 cores on a die, and integrated a memory controller, as well as the 4 different interconnect networks. The cores are only clocked at about 800 MHz, and are thus pretty wimpy individually. However, that can be good. Since the processor cores are quite modest, the ratio of memory bandwidth to CPU flops is quite high. Similarly the ratio of interconnect bandwidth to CPU flops is also very high. Thus the CPUs should run very efficiently on problems that will parallelize to thousands of CPUs. Some problems, on the other hand, will perform terribly. I expect a lot of this system's performance depends on the scalability of the system software, and the compilers / libraries.

    That said, the earth simulator is also really good at some applications, and not so good at others. Instead of 16,000 small CPUs, it uses 5000 massive vector CPUs. Each is clocked at only 500mhz, but has 8 parallel execution pipes, and about 50GBytes/sec of memory bandwidth. Problems that don't vectorize run through the very modest 500mhz scalar unit.

The Earth Simulator has realized a large percentage of its theoretical peak performance on real world simulations (often up to 50%), while most large systems approach 10%. I'm looking forward to seeing how well utilized Blue Gene is. The Earth Simulator was a direct descendant of NEC's SX-series supercomputers, which have a 20 year lineage. Blue Gene is a radical departure from IBM's regular HPC product offerings, and uses a new microkernel OS rather than clustered AIX nodes. I imagine there will be some stutter-steps in the early days of this new product, which will undoubtedly work themselves out over time.

    Great work IBM.
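    To put the bandwidth-to-flops point in numbers, here is a small check using the Earth Simulator figures quoted above (500 MHz, 8 pipes, ~50 GB/s per CPU); the 2 flops per pipe per cycle (multiply-add) is my assumption:

        #include <stdio.h>

        int main(void)
        {
            double clock_ghz = 0.5, pipes = 8.0, flops_per_pipe = 2.0; /* assumed */
            double gflops_per_cpu = clock_ghz * pipes * flops_per_pipe;
            double gbytes_per_sec = 50.0;
            double cpus           = 5000.0;   /* the parent's figure */

            printf("%.0f GFlops/CPU, %.1f bytes/flop, peak ~%.0f TFlops\n",
                   gflops_per_cpu, gbytes_per_sec / gflops_per_cpu,
                   cpus * gflops_per_cpu / 1000.0);
            return 0;
        }

    That gives roughly 6 bytes of memory bandwidth per flop and a ~40 TFlops peak, which is consistent with the measured 35.86 TFlops and the ~90% efficiency cited elsewhere in the thread.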
    • by joib ( 70841 )

      I expect a lot of this system's performance depends on the scalability of the system software, and the compilers / libraries.


The Blue Gene is an all-out MPI machine. System software scalability is not that crucial, since every compute node kernel only controls 2 CPUs. With this modest number of CPUs per node, I'd guess it doesn't require any extreme trickery from the scalability point of view to achieve near-hardware performance.

      Software-wise, all the scalability problems lie in the design of the app
That's only sort-of true. Blue Gene, like ASCI Red, the Cray T3E, the Paragon, etc., uses a microkernel OS to control the compute nodes. This is basically a couple of network stacks that allow the application to use the interconnect network, and some hooks for the larger OS which runs on dedicated OS-nodes. The microkernel mostly just gets out of the way, and lets the application run balls-out on the compute nodes. Blue Gene was even designed so cleverly that MPI barriers and all-reduce are implemented as part of the i
        • Ok, you clearly know what you're talking about, a welcome change here on /. :)

I thought your comment about scalability of the system software meant the traditional scalability woes of SMP systems, complicated lock hierarchies and the like. Clearly the problems faced by MPP systems are different, and not as limiting, since we can build MPP systems with about 2 orders of magnitude more CPUs than shared memory systems.


          But then the application does something like write(file, offset, &buffer). That can'
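    The truncated point above is about function shipping: a compute node has no local filesystem, so a write() becomes a message to an i/o node, which performs the real system call. A toy MPI sketch of the idea, with rank 0 standing in for the i/o node; this is illustrative only, not the actual BlueGene/L protocol:

        /* Function-shipping sketch: "compute" ranks send write requests to
         * an "i/o" rank instead of touching the filesystem themselves.    */
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        enum { TAG_WRITE = 1 };

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {
                /* i/o node role: service one shipped write() per compute rank. */
                for (int i = 1; i < size; i++) {
                    char buf[256];
                    MPI_Status st;
                    MPI_Recv(buf, sizeof buf, MPI_CHAR, MPI_ANY_SOURCE,
                             TAG_WRITE, MPI_COMM_WORLD, &st);
                    /* A real i/o daemon would write() to the parallel FS here. */
                    printf("i/o node writing for rank %d: %s\n", st.MPI_SOURCE, buf);
                }
            } else {
                /* compute node role: "write()" is just a message to rank 0. */
                char buf[256];
                snprintf(buf, sizeof buf, "hello from compute rank %d", rank);
                MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 0,
                         TAG_WRITE, MPI_COMM_WORLD);
            }

            MPI_Finalize();
            return 0;
        }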
  • 36.01 What ? (Score:3, Informative)

    by JohnHegarty ( 453016 ) on Wednesday September 29, 2004 @08:35AM (#10382727) Homepage
http://foldoc.doc.ic.ac.uk/foldoc/foldoc.cgi?query=teraflop
  • With ~16,000 processors now, and over 130,000 when it goes into production, getting all those CPUs to talk to one another is quite a challenge. Did they use infiniband? Or a proprietary interconnect, perhaps?

    Chip H.
    • by joib ( 70841 ) on Wednesday September 29, 2004 @08:43AM (#10382786)

      Did they use infiniband? Or a proprietary interconnect, perhaps?


      Proprietary. Actually, it has 3 networks, one mesh network for point-to-point communication, one tree network for collective communication and a service network for disk i/o, control, health monitoring etc. The service network is ethernet IIRC, the other two are custom.
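    In MPI terms the split looks like this: plain sends and receives would ride the point-to-point mesh, while collectives map naturally onto the tree. A small sketch; which network a given call actually uses is the runtime's business, not the programmer's:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Point-to-point: exchange a value with ring neighbors (mesh traffic). */
            double mine = (double)rank, from_left = -1.0;
            int right = (rank + 1) % size, left = (rank + size - 1) % size;
            MPI_Sendrecv(&mine, 1, MPI_DOUBLE, right, 0,
                         &from_left, 1, MPI_DOUBLE, left, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Collective: a global sum, the kind of operation the tree network
             * is built to accelerate.                                          */
            double sum = 0.0;
            MPI_Allreduce(&mine, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("got %.0f from left neighbor; global sum = %.0f\n",
                       from_left, sum);
            MPI_Finalize();
            return 0;
        }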
The Virginia Tech Supercomputer (take 2) is due to be clocked soon, and it's also a huge off-the-shelf system. I'd like to see how they compare.

Also, I'll bet big money it's already been used for gaming. What college student could resist?
  • Seen partial towers (Score:2, Informative)

    by gaylenek ( 456348 )
I've been down in the basement of the building and seen a few of the towers 1/2 loaded (at that time), along with the massive cooling system that was added to the building to keep those racks working. Lift up a raised floor panel and the 95 lbs of me will get lifted off the ground (or so it feels).

    Sadly, all 64 racks will never be in Roch, just not enough space.

    Actually, StarTribune [startribune.com] has one (crappy) pic of some towers.
  • How long does it take it to run an infinite loop?
  • Comment removed based on user account deletion
If I remember correctly from the presentations I've seen on it, they will be using Linux as an OS on the "front" nodes ( the ones that users log into to start jobs, compile, etc. ), and a lightweight kernel ( proprietary ) specifically designed for the Blue Gene architecture on the rest of the nodes.
  • What in hell is the Blue Gene logo [ibm.com] all about??


    blakespot

It's a protein. Like in gene.
You know, before they redirected its usage to designing nuclear weapons, Blue Gene was supposed to do life-science calculation stuff...
  • People are fixated on the Earth Simulator because of what it does, not how fast it does it. Geeks care about the specs, but the normals care that it's fast enough to model the weather, which is increasingly destructive and scary. The rest of these supercomputers are used for finding oil and "perfecting" weapons, not nearly as inspiring.
    • It's not fast enough to accurately model the weather, though, since for that you need a system which approaches the complexity of the system being modeled. Projections only last a couple of days with much reliability before reality rears its ugly head and reminds you that you don't have enough processing power. Being able to accurately model weather would be a good thing in most ways.
  • Some google references mention 10-15 watts per node, giving about 250 kilowatts for the 16K node test machine. They were trying to stay under two megawatts for the full blown 130K, 360TF machine planned in a couple years. That is the site power capacity.
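    Those figures check out as straightforward multiplication (node counts rounded to powers of two, and 15 W taken as the high end of the quoted range):

        #include <stdio.h>

        int main(void)
        {
            double w_per_node = 15.0;  /* high end of the quoted 10-15 W */
            printf("16K-node prototype:    ~%.0f kW\n", 16384 * w_per_node / 1e3);
            printf("64K-node full machine: ~%.2f MW\n", 65536 * w_per_node / 1e6);
            return 0;
        }

    About 246 kW for the test machine, and under 1 MW for the full 130K-processor (64K-node) machine, comfortably inside the two-megawatt site capacity.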
  • by museumpeace ( 735109 ) on Wednesday September 29, 2004 @10:17AM (#10383652) Journal
comes from building hardware for a specific task. Unfortunately most of you can't access this little bit of nerd heaven, but some incredibly cool hardware architectures are being described at the High Performance Embedded Computing [mit.edu] conference. Sky [sky.com] and Mercury [mc.com] have some of their hottest new designs here. How about a machine that can do a 256 mega-sample FFT [dilloneng.com] in real time, or a self-configuring supercomputer on a chip [stormingmedia.us]? Of course most of these tricks will never escape the lab except for the speed-ups for rendering engines...one place where gamers and the DOD are driving technology in a dead heat race with lots of winners. Besides, in a few months, something [bjorn3d.com] will come along that will go even faster than Blue Gene.
  • by Chief Typist ( 110285 ) on Wednesday September 29, 2004 @10:33AM (#10383874) Homepage
    So how long will it take before a Mac rumor site predicts that this CPU will be in the next PowerBook?

    -ch
  • How many WAV-to-MP3/second is this?

  • IBM vs. SGI (Score:2, Interesting)

    by nboscia ( 91058 ) *
I wonder how this compares to the one NASA is building [sgi.com] in collaboration with Intel and SGI. Since you can't base performance simply on the number of processors, it should be interesting.
  • The bluegene may be faster, but the Earth Simulator sure looks [jamstec.go.jp] cooler. Obviously, this proves that the Earth Simulator is the superior (superer?) computer.
