Hardware

Two Directions for the Future of Supercomputing

aarondsouza writes: "The NY Times (registration required, mumble... mutter...) has this story on two different directions being taken in the supercomputing community. The Los Alamos labs have a couple of new toys: one built for raw number-crunching speed, the other for efficiency. The article has interesting numbers on the performance/price (price in the power-consumption and maintenance sense) ratios for the two machines. As an aside... 'Deep Blue', 'Green Blade' ... wonder what Google Sets would think of that..."
This discussion has been archived. No new comments can be posted.

  • What could be a better chance to market my grid computing sites list [cyberian.org]. That's how truly massive computational tasks will increasingly be done in the coming years.
    • by grid geek ( 532440 ) on Tuesday June 25, 2002 @06:08AM (#3761533) Homepage
      Rubbish. Speaking as someone doing a PhD in the subject: Grid computing is *not* the answer to every high performance computing problem.

      Latency issues are still going to be there, which makes Grid environments unsuitable for the majority of simulations. You can't do nuclear event simulations effectively if you have the multiple-second delays in communicating between processors that you get in Grids.


      On the other hand, Grids do have several advantages: they provide similar TFLOPS for a much lower price, using several geographically separated systems gives access to more researchers, and research in this area has a lot of practical spin-offs for the future.

      • > Rubbish.

        So are you talking rubbish as well? Because you seem to share exactly the same opinion as I do. :) No offense intended.
        • So are you talking rubbish as well?

          Probably 8*), no offense taken and I hope, none given!

          However, this is a problem: people sometimes don't realise that there are different types of supercomputing problem which need to be approached in different ways. (I assume you do, but some posters may not.)

          There are some problems, such as Seti@Home [berkeley.edu], which are suitable for computation in a widely distributed environment. Each SETI work unit can be analysed without relying on any other, so it doesn't matter if it takes a long time to communicate between processors (if they communicate at all). Others, such as weather simulations, require high-speed, high-bandwidth communication between each processor. In these cases even a Beowulf cluster is going to have far too little bandwidth to be useful. This was the point I was trying to put across. Grids are great for many things, however I'd still want a supercomputer for some problems.

          I see it as a parallel evolution between the two methods. Supercomputers to give us the hardware, Grid systems to provide the software & make use of the hardware when it becomes affordable.

  • I'd be VERY interested to see comparison numbers between the two machines.... Which one actually delivers more bang for the buck? Because, I think, efficient would imply getting more power out of fewer resources, right?

    On a work-per-dollar basis, which one actually delivers more?

    (screw the NY times, I'll glean the data from re-posts) (and screw all of you first-post weenies.... nothing but garbage 'tween your ears)
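
    For what it's worth, here's a back-of-the-envelope check in C using the figures quoted in the article (reproduced in a comment below). These are peak ratings and list prices only, so treat it as a sketch rather than a real benchmark:

      #include <stdio.h>

      int main(void) {
          /* Peak figures quoted in the NYT article; sustained
             performance on a real workload will differ. */
          double q_ops   = 30e12;    /* Q: 30 teraops              */
          double q_cost  = 215e6;    /* Q: $215 million            */
          double gd_ops  = 160e9;    /* Green Destiny: 160 gigaops */
          double gd_cost = 335e3;    /* Green Destiny: $335,000    */

          printf("Q:             %.0f ops/s per dollar\n", q_ops / q_cost);
          printf("Green Destiny: %.0f ops/s per dollar\n", gd_ops / gd_cost);
          printf("GD advantage:  %.1fx\n", (gd_ops / gd_cost) / (q_ops / q_cost));
          return 0;
      }

    On raw price/performance the little machine wins by roughly 3x -- but as the replies point out, there are problems that only a tightly coupled machine like Q can touch.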
    • bang for your buck is important, but no matter how many economy machines you produce you can NEVER get up to what a Deep Blue can... so really the best option is to go with both...
      • A few interesting questions have just occurred to me....

        How do these megawatt sucking super-machines compare to distributed computing??

        Compare, say, the folding@home project, and deep blue... Which has done / does more work/calculations per hour?

        How long would it take deep blue to take the #1 spot on the folding@home rankings??

        and how great would the potential benefits be, to the human race as a whole, if these monster computers did some folding in their idle time???

        • Re:Interesting.... (Score:3, Insightful)

          by Dilbert_ ( 17488 )
          I think if you add up all the watts sucked up by the myriads of smaller PCs in those projects, you'd get a respectable amount of electricity too... Imagine the inefficiency of just the monitor screens on all these machines sucking up power.
  • At Los Alamos, Two Visions of Supercomputing
    By GEORGE JOHNSON

    Moore's Law holds that the number of transistors on a microprocessor -- the brain of a modern computer -- doubles about every 18 months, causing the speed of its calculations to soar. But there is a downside to this oft-repeated tale of technological progress: the heat produced by the chip also increases exponentially, threatening a self-inflicted meltdown.

    A computer owner in Britain recently dramatized the effect by propping a makeshift dish of aluminum foil above the chip inside his PC and frying an egg for breakfast. (The feat -- cooking time 11 minutes -- was reported in The Register, a British computer industry publication.) By 2010, scientists predict, a single chip may hold more than a billion transistors, shedding 1,000 watts of thermal energy -- far more heat per square inch than a nuclear reactor.

    The comparison seems particularly apt at Los Alamos National Laboratory in northern New Mexico, which has two powerful new computers, Q and Green Destiny. Both achieve high calculating speeds by yoking together webs of commercially available processors. But while the energy-voracious Q was designed to be as fast as possible, Green Destiny was built for efficiency. Side by side, they exemplify two very different visions of the future of supercomputing.

    Los Alamos showed off the machines last month at a ceremony introducing the laboratory's Nicholas C. Metropolis Center for Modeling and Simulation. Named for a pioneering mathematician in the Manhattan Project, the three-story, 303,000-square-foot structure was built to house Q, which will be one of the world's two largest computers (the other is in Japan). Visitors approaching the imposing structure might mistake it for a power generating plant, its row of cooling towers spewing the heat of computation into the sky.

    Supercomputing is an energy-intensive process, and Q (the name is meant to evoke both the dimension-hopping Star Trek alien and the gadget-making wizard in the James Bond thrillers) is rated at 30 teraops, meaning that it can perform as many as 30 trillion calculations a second. (The measure of choice used to be the teraflop, for "trillion floating-point operations," but no one wants to think of a supercomputer as flopping trillions of times a second.)

    Armed with all this computing power, Q's keepers plan to take on what for the Energy Department, anyway, is the Holy Grail of supercomputing: a full-scale, three-dimensional simulation of the physics involved in a nuclear explosion.

    "Obviously with the various treaties and rules and regulations, we can't set one of these off anymore," said Chris Kemper, deputy leader of the laboratory's computing, communications and networking division. "In the past we could test in Nevada and see if theory matched reality. Now we have do to it with simulations."

    While decidedly more benign than a real explosion, Q's artificial blasts -- described as testing "in silico" -- have their own environmental impact. When fully up and running later this year, the computer, which will occupy half an acre of floor space, will draw three megawatts of electricity. Two more megawatts will be consumed by its cooling system. Together, that is enough to provide energy for 5,000 homes.

    And that is just the beginning. Next in line for Los Alamos is a 100-teraops machine. To satisfy its needs, the Metropolis center can be upgraded to provide as much as 30 megawatts -- enough to power a small city.

    That is where Green Destiny comes in. While Q was attracting most of the attention, researchers from a project called Supercomputing in Small Spaces gathered nearby in a cramped, stuffy warehouse to show off their own machine -- a compact, energy-efficient computer whose processors do not even require a cooling fan.

    With a name that sounds like an air freshener or an environmental group (actually it's taken from the mighty sword in "Crouching Tiger, Hidden Dragon"), Green Destiny measures about two by three feet and stands six and a half feet high, the size of a refrigerator.

    Capable of a mere 160 gigaops (billions of operations a second), the machine is no match for Q. But in computational bang for the buck, Green Destiny wins hands down. Though Q will be almost 200 times as fast, it will cost 640 times as much -- $215 million, compared with $335,000 for Green Destiny. And that does not count housing expenses -- the $93 million Metropolis center that provides the temperature-controlled, dust-free environment Q demands.

    Green Destiny is not so picky. It hums away contentedly next to piles of cardboard boxes and computer parts. More important, while Q and its cooling system will consume five megawatts of electrical power, Green Destiny draws just a thousandth of that -- five kilowatts. Even if it were expanded, as it theoretically could be, to make a 30-teraops machine (picture a hotel meeting room crammed full of refrigerators), it would still draw only about a megawatt.

    "Bigger and faster machines simply aren't good enough anymore," said Dr. Wu-Chung Feng, the leader of the project. The time has come, he said, to question the doctrine of "performance at any cost."

    The issue is not just ecological. The more power a computer consumes, the hotter it gets. Raise the operating temperature 18 degrees Fahrenheit, Dr. Feng said, and the reliability is cut in half. Pushing the extremes of calculational speed, Q is expected to run in sprints for just a few hours before it requires rebooting. A smaller version of Green Destiny, called Metablade, has been operating in the warehouse since last fall, requiring no special attention.

    "There are two paths now for supercomputing," Dr. Feng said. "While technically feasible, following Moore's Law may be the wrong way to go with respect to reliability, efficiency of power use and efficiency of space. We're not saying this is a replacement for a machine like Q but that we need to look in this direction."

    The heat problem is nothing new. In taking computation to the limit, scientists constantly consider the trade-off between speed and efficiency. I.B.M.'s Blue Gene project, for example, is working on energy-efficient supercomputers to run simulations in molecular biology and other sciences.

    "All of us who are in this game are busy learning how to run these big machines," said Dr. Mike Levine, a scientific director at the Pittsburgh Supercomputing Center and a physics professor at Carnegie Mellon University. A project like Green Destiny is "a good way to get people's attention," he said, "but it is only the first step in solving the problem."

    Green Destiny belongs to a class of makeshift supercomputers called Beowulf clusters. Named for the monster-slaying hero in the eighth-century Old English epic, the machines are made by stringing together off-the-shelf PC's into networks, generally communicating via Ethernet -- the same technology used in home and office networking. What results is supercomputing for the masses -- or, in any case, for those whose operating budgets are in the range of tens or hundreds of thousands of dollars rather than the hundreds of millions required for Q.

    Dr. Feng's team, which also includes Dr. Michael S. Warren and Eric H. Weigle, began with a similar approach. But while traditional Beowulfs are built from Pentium chips and other ordinary processors, Green Destiny uses a special low-power variety intended for laptop computers.

    A chip's computing power is ordinarily derived from complex circuits packed with millions of invisibly tiny transistors. The simpler Transmeta chips eliminate much of this energy-demanding hardware by performing important functions using software instead -- instructions coded in the chip's memory. Each chip is mounted along with other components on a small chassis, called a blade. Stack the blades into a tower and you have a Bladed Beowulf, in which the focus is on efficiency rather than raw unadulterated power.

    The method has its limitations. A computer's power depends not just on the speed of its processors but on how fast they can cooperate with one another. Linked by high-speed fiber-optical cable, Q's many subsections, or nodes, exchange data at a rate as high as 6.3 gigabits a second. Green Destiny's nodes are limited to 100-megabit Ethernet.

    The tightly knit communication used by Q is crucial for the intense computations involved in modeling nuclear tests. A weapons simulation recently run on the Accelerated Strategic Computing Initiative's ASCI White supercomputer at Lawrence Livermore National Laboratory in California took four months of continuous calculating time -- the equivalent of operating a high-end personal computer 24 hours a day for more than 750 years.

    Dr. Feng has looked into upgrading Green Destiny to gigabit Ethernet, which seems destined to become the marketplace standard. But with current technology that would require more energy consumption, erasing the machine's primary advantage.

    For now, a more direct competitor may be the traditional Beowulfs with their clusters of higher-powered chips. Though they are cheaper and faster, they consume more energy, take up more space, and are more prone to failure. In the long run, Dr. Feng suggests, an efficient machine like Green Destiny might actually perform longer chains of sustained calculations.

    At some point, in any case, the current style of supercomputing is bound to falter, succumbing to its own heat. Then, Dr. Feng hopes, something like the Bladed Beowulfs may serve as "the foundation for the supercomputer of 2010."

    Meanwhile, the computational arms race shows no signs of slowing down. Half of the computing floor at the Metropolis Center has been left empty for expansion. And ground was broken this spring at Lawrence Livermore for a new Terascale Simulation Facility. It is designed to hold two 100-teraops machines.

  • The photo in the article talking about Green Destiny shows RLX [rlx.com] shelves in the background.

    -ez
    • by Anonymous Coward
      That's because it is. :) But RLX initially intended their hardware for web hosting applications. Green Destiny just integrates it differently (kind of like typical PCs are meant for non-supercomputer use, but Beowulf clusters integrate them differently).
    • It IS a cluster of RLX Transmeta blades, each containing a 667MHz processor and 640MB memory, connected by 100Mbit Ethernet. It's not meant to compete with "Q". It's simply a great departmental or workgroup cluster. However, its efficiency suggests it might be a concept worth exploring for future cluster supercomputing architectures. Hippster
  • all i see (Score:3, Informative)

    by martissimo ( 515886 ) on Tuesday June 25, 2002 @03:37AM (#3761327)
    is that the writer has noticed that it is cheaper to run a Beowulf than to run a true supercomputer, but in return for the price you sacrifice performance...

    though i did find the line about Q needing to be rebooted every few hours kinda funny, i mean when are they gonna learn to stop installing Windows on a 100 million dollar supercomputer ;)
    • I don't think they are running Windows - if they were, it would need to be rebooted every millisecond at that speed!
    • Clearly what they need to do is build a Beowulf cluster of Q machines, so that heartbeat can redirect the computations during the reboot of one of the machines...

      Of course, that would imply that there was always a machine coming out of reboot...

      More seriously, how could one figure out the optimum strength/speed/Flops/? for a node if one were building a cluster of identical machines? It seems clear that up to some size it's better to stick everything in the same box, and beyond that it makes more sense to cluster them. But how would one decide where to draw the line? (I was considering this as a technical problem, for the moment ignoring costs [which would often just say: grab a bunch of used machines and slap them together].)
  • I have an idea (Score:2, Redundant)

    by dimator ( 71399 )
    We all know that NYtimes requires registration by now. Can we skip the damn warning every time there's a story there?

    • Re:I have an idea (Score:3, Insightful)

      by oever ( 233119 )
      Or better, give a userid and passwd for a NYTimes account.

      I'm sure it's legal. It's like sharing/swapping discount passes at the supermarket.
      • That doesn't work, because whenever someone gives a good one out for everybody to use (like username slashdot, password slashdot), someone comes along and says "Cool, I can get the "slashdot" account for my own personal use, alls I gotta do is change the password!"
        • Actually, they don't allow the login name cypherpunks anymore. Nor do they allow cypherpunks1, cypherpunks2, cypherpunks99, cypherpunks742, cypherpunks999, ... They do allow cypherpunks followed by four digits, but some of those may be taken.
  • (my emphasis)

    • Deep Blue
    • Stand Away
    • Solitaer
    • Master Mind
    • Floor planing
    • Reaching Horizons
    • Freedom Call
    • DEEP RED
    • Queen Of The Night
  • by selderrr ( 523988 ) on Tuesday June 25, 2002 @03:39AM (#3761330) Journal
    "The NY Times (registration required, mumble... mutter..."

    next time you put "registration" between brackets, followed by two words, you better make sure that those two words are userID and passwd!

    I really wonder what the NYT logfile-monkeys think when they see a zillion 'mumble/mutter' login attempts...
  • The article said something to the effect that in 10 years or so a chip will come with a billion transistors and suck up a full 1000 watts. Ok, how long until I can say my desktop uses 1.21 gigawatts?
    • The article said something to the effect that in 10 years or so a chip will come with a billion transistors and suck up a full 1000 watts.

      From the article: By 2010, scientists predict, a single chip may hold more than a billion transistors, shedding 1,000 watts of thermal energy -- far more heat per square inch than a nuclear reactor.

      Power consumption and heat dissipation per unit area are two different things.
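
      To make the distinction concrete, a tiny sketch in C -- note the die area here is my own assumption for illustration, not a number from the article:

        #include <stdio.h>

        int main(void) {
            double power_w     = 1000.0;  /* predicted chip dissipation (from the article) */
            double die_cm2     = 3.0;     /* assumed die area -- purely illustrative       */
            double cm2_per_in2 = 6.4516;

            double w_cm2 = power_w / die_cm2;
            printf("%.0f W total -> %.0f W/cm^2 (%.0f W/in^2)\n",
                   power_w, w_cm2, w_cm2 * cm2_per_in2);
            return 0;
        }

      A kilowatt spread over a whole rack is easy to cool; the same kilowatt squeezed into a few square centimetres of silicon is the problem.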

  • Google Sets: [google.com]
    Predicted Items
    Deep Blue
    Stand Away
    Solitaer
    Floor planing
    Master Mind
    Reaching Horizons
    Freedom Call
    DEEP RED
    etc
    Queen Of The Night
    Painkiller
    Today's Technology
    Recent developments in AI
    His literary influences
    Angels Cry
    Never Understand
    Red
    After rain
    The Renju International Federation
    Game of Go Ring
    Gateway Inc
    Dell Computer Corp
    IBM
    Carry On
    The future of AI
    Food Chain Fish
    Deep Yellow
    Violet
    ZITO
    Forest Green
  • by Howzer ( 580315 ) <grabshot&hotmail,com> on Tuesday June 25, 2002 @03:47AM (#3761353) Homepage Journal
    Since both the designs mentioned in the article seem to be fully scalable, we come back to the age-old lowest common denominator of power:

    How many people can hold the handle that turns the crank? Or in modern terms, how much juice can you reasonably throw at these beautiful monsters!?

    So with this in mind, I don't think it's too off-topic to mention this article [sfgate.com] which talks about the gutting of funding for fuel cells. Or this student research paper site [umich.edu] which talks about the inherent economy of different sources of energy in various terms. (Warning! They are pro-nuclear, so YMMV!) Also, if you are interested in where this topic takes you you should stop off here [doe.gov] to follow up on whatever takes your fancy as far as energy production goes. They've got a veritable mountain of info. Check out their hydrogen economy [nrel.gov] stuff.

    Whoever thought up the names of the two machines needs to get a grant or something! Green Destiny, mmmmmmm! Q, grooowwwl!

  • by GCP ( 122438 ) on Tuesday June 25, 2002 @03:58AM (#3761369)
    Maybe we could dispense with this sort of nonsense every time the NY Times is referred to. If people know what the "mumble...mutter" refers to, they don't need the note. If they don't, then the note doesn't help.


  • Well, there has always been a need for different supercomputers. There are number-crunchers, vector machines, etc. You have to get the machine that suits your needs. That's all. Some machines need HUGE datasets to work on, but the calculation each cycle isn't complex. Others do very complex calculations on small datasets (where a Beowulf works wonders). But you can't use a Linux cluster THAT efficiently if you have to move several hundred gigabytes of data around between the nodes.

    On a side note: why is "blah blah NY Times had registration, we know it" informative? I say offtopic!

  • by Anonymous Coward

    The Q machine is a big cluster of Alpha servers with some kind of fast interconnect. The Green Destiny cluster seems to be a Transmeta blade cluster with some kind of commodity interconnect. Both are basically big collections of independent CPU's talking over some kind of fast network connection.
    They are distributed memory clusters: each machine has its own memory and they interact through a fast network.

    There are other architectures where you have all the processors sharing the same memory, and they communicate over the memory bus. Kind of like the difference between 16 PC's talking over gigabit ethernet, and a 16 processor Sun box.
    At another level there is the whole vector vs. scalar architecture question. The Japanese have a 36 teraflop vector supercomputer that leaves our machines in the dust.

    The article is misleading because the machines described are at different ends of a price spectrum, not at different ends of an architectural spectrum. You aren't looking at different approaches, you're just looking at different price points.
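
    For anyone unfamiliar with the distinction, here is a rough sketch of the shared-memory style in C using OpenMP (my choice of example, not anything from the article; the message-passing equivalent would use MPI -- see the example further down):

      #include <stdio.h>
      #include <omp.h>

      int main(void) {
          enum { N = 1000000 };
          static double a[N];
          for (int i = 0; i < N; i++) a[i] = 1.0;

          double sum = 0.0;
          /* Every thread reads the same array directly; the "interconnect"
             is just the shared memory bus, not a network. */
          #pragma omp parallel for reduction(+:sum)
          for (int i = 0; i < N; i++)
              sum += a[i];

          printf("sum = %g using up to %d threads\n", sum, omp_get_max_threads());
          return 0;
      }

    Build with something like gcc -fopenmp. On a distributed-memory cluster the same sum has to be computed as local partial sums that are then explicitly shipped over the network and combined.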

  • A beowulf cluster of... of... *collapses from sudden heart attack*
  • Moores Law (Score:2, Informative)

    by wiZd0m ( 192990 )
    Moore's Law holds that the number of transistors on a microprocessor -- the brain of a modern computer -- doubles about every 18 months, causing the speed of its calculations to soar.

    This is a myth for the non-techie: it's transistor density that doubles every 18 months, not the number of transistors.
  • From the article:

    "Armed with all this computing power, Q's keepers plan to take on what for the Energy Department, anyway, is the Holy Grail of supercomputing: a full-scale, three-dimensional simulation of the physics involved in a nuclear explosion."

    Come on, for Pete's sake, can't we do better stuff with this than simulate the physics in a nuclear explosion? Honestly, who cares about that? We all know it's bad, real bad. Move on to something near and dear to a lot of us out there... cancer, AIDS, heart disease, etc.

    This is probably an unpopular view, but damn, enough destruction already.
    • Heh. You forget that none of the causes you mention involve playing with huge computers. A lot of these machines are, however, being used to do protein-folding simulations -- Blue Gene, or the PNNL's new machine (I think). I'm fine with simulating nukes, because it means fewer Pacific islands get slagged. Protein folding, on the other hand, is often something of a joke -- some people get very interesting results that tell us a lot about biophysics, but absolutely nothing whatsoever indicates that we'll be able to do accurate structure prediction anytime soon. It's amazing how many people think completely computerized drug design is right around the corner.

      From what I've read, a real useful advance in computational biology would be to automate building and refining of protein structures from crystallography. It's just not as sexy, though.
      • > You forget that none of the causes you mention involve playing with huge computers.

        Sure they do. It's called "rational drug design". I spent a year of my life modelling networks of heart cells at an electrochemically detailed level. That line of work, if it was funded and pursued, could incrementally resolve the various causes of heart failure, and provide a wide array of mechanisms to detect, prevent, or treat the various forms of arrhythmia. It requires a lot of computing power to simulate the behaviour of networks of billions of cells at a fine scale.
    • Honestly? If you have nuclear devices, you'd better be damn sure that they'll work, and you'd also better be damn sure that every potential target believes that a) they'll work, and b) if pushed hard enough, we'll use them. If either of the latter two conditions doesn't hold, then there's a risk of a) finding out that they fizzled (bad), or b) using them in a situation that could have been avoided if sufficient fear had already existed.

      As to cancer and HIV/AIDS, I don't see how you can put them in even the same ballpark. HIV is linked very, very strongly to easily avoided behavior -- the use of prostitutes, promiscuity, needle sharing et al. Most cancer victims (sun-related melanomas excepted) are far more innocent of their ailments.
    • It's not all about weapons and destruction... understanding nuclear fusion reactions could be the key to providing huge, clean sources of energy for the world.


      The reason there are no fusion power plants is because we don't know how to handle the vast amounts of energy released in a fusion reaction. If it can be modeled and studied, maybe that energy source can become practical.

      • That's a very good reason, but that's not the way the simulations have been used in the past, or at least not the way that justified the funding. I don't see any reason to expect the future to differ from the past along this axis.

        The current interest in simulating nuclear explosions occurs simultaneously with Bush declaring that tactical nukes might be a good idea, and exploring techniques for using them to destroy hardened underground bunkers (see this month's Scientific American). So I think that's the most likely justification for the current interest.

        • The Q system was under bid in 2000, the project started construction before Bush was in office, the goal is to simulate a nuclear explosion in order to help maintain our current (let me repeat that, current) stockpile without having to detonate one to find out if it still works right. Considering that this reduces nuclear testing, most anti-nuke advocates should be happy.
        • The money isn't coming from the commercial power production interests in the government - they don't do explosions (except in countries like Iraq where it helps oil prices, or occasionally dynamite explosions for echolocation to find underground oil.) It's coming from the people who *do* like explosions. Always has. Some of it's tactical nukes, some of it's strategic nukes, some of it's "when do we have to replace our current warheads?", some of it's "how small/large/well-aimed a bomb can we make?". Bush is heavily involved with the military-industrial complex, but the Labs have been working on this kind of thing for several years, expanding their groups of people who simulate blowing stuff up since they can't do much actual testing. Some of them like explosions because they like blowing stuff up, while others of them like explosions because what else does a nuclear-weapons physicist do in today's market?
      • No, the reason there are no practical fusion power plants is that creating fusion conditions in a plasma using current technology uses more power than can be extracted.

        Research fusion reactors exist, but they don't produce net power, rather they consume it. That is why they are still research.

        This is pretty much unrelated to the problem of simulating fusion bombs, which uses a different fuel (for the final fusion stage, typically lithium-6 deuteride), the ignition of which involves a whole series of reactions between a large number of different materials, initiated by the detonation of a fission bomb boosted with deuterium or tritium.

        Fusion power plants typically use plasmas, not solids, for their fuel, and are ignited by confinement and heating. The amount of energy released is pretty much self-regulating, since the plasma will tend to lose confinement and burn less if it gets too hot.

        Power generation and weapons have very different design goals: power plants tend to be big, stay still, and produce large amounts of power for a long time, connected to a power transmission system, with human operators nearby. Weapons need to be moved quickly to a target (i.e. be light, compact, and robust), and generate a huge amount of power in a very brief time, with care taken to preserve the safety of the weapon's handlers, and not much done to preserve the safety of any people at the target.

        (Of course, the two fields are not totally separate; however, my main point is that they typically involve very different computer simulations).
    • The ASCI supercomputers simulate nuclear explosions because testing real nuclear weapons is forbidden by the Comprehensive Test Ban Treaty. For various reasons, maintaining a nuclear stockpile requires a certain amount of testing -- either for safety or to ensure that such weapons actually work -- duds apparently reduce deterrent value. Thus the ASCI project.

      If there was no nuclear stockpile to maintain, Los Alamos probably would not be host to the "world's fastest supercomputers." Not exactly a fair trade...
  • by Zo0ok ( 209803 ) on Tuesday June 25, 2002 @05:26AM (#3761487) Homepage
    Ok, Q is rated at 30 teraops at 5 MW. Green Destiny is capable of 160 gigaflops at 5 kW.

    This means that the power efficiency difference is just a mere factor of 5. The problem with supercomputing is of course scaling and interconnecting the CPUs... The author argues that Green Destiny is "not so picky", and "hums away contentedly next to piles of cardboard boxes and computer parts" while Q requires special buildings and monstrous cooling installations. Yeah, so what, it is a much smaller machine.

    Of course it is easier to build a small machine than a large machine. I would say that even though Green Destiny is only 0.5% as fast as Q and is designed with power consumption in mind, it is just 5 times as efficient.
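
    Quick arithmetic in C with the article's figures (peak numbers for both machines, and ignoring the op-vs-flop issue raised below):

      #include <stdio.h>

      int main(void) {
          double q_ops  = 30e12, q_watt  = 5e6;  /* Q: 30 teraops, 5 MW incl. cooling */
          double gd_ops = 160e9, gd_watt = 5e3;  /* Green Destiny: 160 gigaops, 5 kW  */

          printf("Q:             %.1f Mops per watt\n", q_ops / q_watt / 1e6);
          printf("Green Destiny: %.1f Mops per watt\n", gd_ops / gd_watt / 1e6);
          printf("ratio:         %.1f\n", (gd_ops / gd_watt) / (q_ops / q_watt));
          return 0;
      }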

    Can anyone tell me (or point to a resource) how CPU power consumption depends on transistor size and clock frequency. Will a chip with a given size operating at a given clock frequency require the same amount of power, regardless of the number of transistors in it?
    • Sorry, but an op isn't the same as a flop. Floating point operations usually take longer. So the difference may be closer to a factor of 10. This, of course, depends quite a lot on just what your problem is. If you don't use floating point operations, it could be a much smaller difference. If you use them by the bushel, it could easily be even greater.

  • I wish the media would stop talking about "Moore's Law" (and for that matter, referring to CPUs as the computer's brain).

    NYT:
    Moore's Law holds that the number of transistors on a microprocessor -- the brain of a modern computer -- doubles about every 18 months,

    Well, Gordon observed the exponential increase of transistors on ICs in Electronics, Volume 38, Number 8, April 19, 1965. There were no microprocessors until about 1970! Also, he never mentioned 18 months, though it can be inferred.

    -Kevin

  • PPC chips are generally more efficient than their x86 counterparts, which makes them a low-TCO option for universities. I wonder what benefit they would have had if they used Motorola's newer mobile G3 processors -- an already efficient chip designed for an ultra-efficient environment.
    • And your point is? FYI, none of the computers in the article uses x86 chips (ok you may of course argue that the transmeta chips are x86).
      As for Motorola chips giving the best TCO, I'm not sure I buy it. Most clusters built on a tight budget these days use Athlons or P4s, often in a 2-way configuration. I'm sure they'd use Motorola chips if they would provide better TCO and software was available. :) Any good Fortran 95 compilers for the G4? Thought not..
    • If you're trying to get maximum bang for the chip buck, and willing to custom-build boards (which people often are for multi-million-dollar highly-custom machines), digital signal processor chips often have rocking performance for dumb fast applications. For instance, the TI TMS320C6713 [ti.com] can do up to 1800 MFLOPS at 225MHz (probably only 1350 double-precision), while most general-purpose CPUs do less than one MFLOPS per megahertz, and have a reasonable amount of memory and I/O bandwidth. You won't be running off-the-shelf Beowulf on them, but it's not hard to build them into PCI boards or multi-processor PCI boards that you can feed data from a conventional CPU, and they come with compilers and usually other programming environments.
  • by Orp ( 6583 )
    Supercomputing is an energy-intensive process, and Q (the name is meant to evoke both the dimension-hopping Star Trek alien and the gadget-making wizard in the James Bond thrillers) is rated at 30 teraops, meaning that it can perform as many as 30 trillion calculations a second. (The measure of choice used to be the teraflop, for "trillion floating-point operations," but no one wants to think of a supercomputer as flopping trillions of times a second.)

    Since when did flops turn into ops? It's important to make a distinction between floating point operations and integer operations, right? Seems pretty dumb to me. Or is it a cracker/hacker kind of thing...

    Orp
    • Re:Teraops? (Score:3, Informative)

      by pclminion ( 145572 )
      Since when did flops turn into ops? It's important to make a distinction between floating point operations and integer operations, right?

      Not really, for two reasons: first, supercomputer CPUs are rigged for floating point, and they do it really fast anyway. Second, a super CPU is so fast compared to RAM that the time difference between an integer op and a floating point op is almost totally amortized into the RAM access time anyway. In other words, computing a float multiplication might be 1.5 times slower than an integer multiplication, but it's still 200 times faster than a RAM access.

      Then you have to work out what exactly you mean by "operation" -- a single multiplication, or a single vector instruction (which might multiply 64 numbers in one shot). It quickly becomes difficult to judge performance based on some "flops" or "ops" number. To figure out performance it's better to just run the real application and see how fast it goes...
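
      As a toy illustration of "just run it and see", something like this (numbers will vary wildly with compiler flags and hardware, of course):

        #include <stdio.h>
        #include <time.h>

        int main(void) {
            enum { N = 1 << 20, REPS = 500 };
            static double a[N], b[N];
            for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

            double dot = 0.0;
            clock_t t0 = clock();
            for (int r = 0; r < REPS; r++)
                for (int i = 0; i < N; i++)
                    dot += a[i] * b[i];          /* 2 flops per iteration */
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            if (secs <= 0.0) secs = 1e-9;        /* guard against clock granularity */

            printf("dot = %g, ~%.0f Mflop/s sustained\n",
                   dot, 2.0 * N * REPS / secs / 1e6);
            return 0;
        }

      Swap the inner loop for your real kernel and you get a number that actually means something for your problem, unlike a peak ops rating.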

    • I guess it's some braindead idea the NY Times has got. For benchmarking supercomputers (which of course doesn't necessarily say much about the performance of any particular application), flops is still the way to go -- more precisely, in the form of the Linpack benchmark, whose results are published at the top500 [top500.org] site. Incidentally, the site has recently been updated (it usually is twice per year).
  • From an economic point of view, maybe the more interesting question to ask is which of these machines is more easily programmable -- especially when the man-hours involved in developing software of this complexity typically end up being a significant fraction of the cost of the entire project. In particular, time spent testing and debugging has got to be especially expensive given the enormous complexity of the problems being solved, which makes me wonder -- do we perhaps need less super and more smart in our big iron? This is pure speculation -- I don't know the specifics of Q or Green Destiny -- but I'd imagine that a custom machine requires the development of a custom compiler that knows how to take full advantage of the hardware (not to mention building and optimizing the numerical libraries, etc. that the system's users will need). As anyone who's built a compiler can tell you, this is not a trivial task!
    • No. They use standard compilers and tools for their respective architectures (that is Tru64 for Q and I guess Linux for Green Destiny). The applications are programmed using MPI, a FORTRAN/C/C++ message passing API which is an absolute bitch to program.
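
      For a flavour of what MPI code looks like, here is a minimal C example -- a toy sum across processes, nothing resembling a real simulation code, but real codes do this kind of exchange constantly, which is exactly where the interconnect speed matters:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?  */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are there?  */

            double local = (double)rank;   /* stand-in for a locally computed result */
            double total = 0.0;
            /* Combine every process's contribution on rank 0 -- one explicit
               message-passing step; there is no shared memory to lean on. */
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("sum over %d ranks = %g\n", size, total);

            MPI_Finalize();
            return 0;
        }

      Build with mpicc and launch with mpirun; the pain the parent mentions comes when you have to decompose a real problem's data across nodes and keep hundreds of these exchanges consistent and fast.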
  • by Anonymous Coward on Tuesday June 25, 2002 @07:57AM (#3761786)
    For those of you who are wondering what they mean by high performance networks inside the Q machine..

    The Q machine utilizes dual-rail Quadrics cards according to this [supercomputingonline.com]. Dual rail refers to using two NI cards (each one on a separate 64b/66MHz PCI bus so they can get the most out of the I/O system of the host).

    I hadn't heard of Quadrics so I looked them up. At their web site [quadrics.com] you find out that it's a switched network that gets 340 MBytes per second between applications, with latencies around 3-5 microseconds. Compare this to 100Mbps Ethernet, which gets 10 MBytes/s and latencies of 70+ microseconds, and you'll understand why the Q machine will run fine-grained parallel apps that the green machine won't be able to touch.
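
    A crude model of message cost (time = latency + size/bandwidth, plugging in the rough numbers above) shows why fine-grained codes care so much about the interconnect:

      #include <stdio.h>

      /* time in microseconds to move one message of `bytes` bytes */
      static double xfer_us(double bytes, double latency_us, double mbytes_per_s) {
          return latency_us + bytes / mbytes_per_s;  /* 1 MB/s == 1 byte/us */
      }

      int main(void) {
          double sizes[] = { 64.0, 4096.0, 1048576.0 };  /* small, medium, large */
          for (int i = 0; i < 3; i++) {
              double b = sizes[i];
              printf("%9.0f bytes: Quadrics %9.1f us   100Mb Ethernet %9.1f us\n",
                     b, xfer_us(b, 4.0, 340.0), xfer_us(b, 70.0, 10.0));
          }
          return 0;
      }

    For small messages the latency dominates and the gap is well over an order of magnitude; for a code that exchanges boundary data every time step, that difference decides whether it scales at all.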

    Looking a bit through the literature, I noticed that Quadrics uses IEEE 1596.3 for its link signaling (400 MBaud, 10 bit). While they don't say it anywhere, this IEEE standard is the well-known SCI standard (Scalable Coherent Interconnect -- pretty popular in Europe, but the US has been dominated by Myrinet, which I coincidentally use at school).

    Hope this gives some more detail about the arch..
    • Yes, Quadrics is pretty sweet. Also, the new PNL Linux/IA64 8.4 Tflop supercomputer, which is supposed to come online at the end of the year, uses Quadrics.

      From reading the Beowulf mailing list archives, I remember the estimated cost is about $3500-$5000 per node, compared to about $2000 for Myrinet or Wulfkit.
  • This article succeeds in pointing out the fact that we are reaching peak computing speeds. As is common knowledge, exponential growth does not continue forever: it reaches a peak. Adleman figured this out when studying encryption algorithms, and his solution was DNA computing. Since then, researchers have been working on such systems. Just this week I saw a lecture at the Field Museum of Natural History in Chicago about biocomputing. Chemical reactions will power future computers, apparently around the year 2010 when the exponential growth curve levels off once and for all. It seems that the future of supercomputing goes much further than supercomputers as we know them. Instead, computers will not resemble current computers at all. They have already built such systems, including a robot that runs off of chemical reactions. This applies to artificial intelligence too, as these reactions make it easy for systems to learn, as was demonstrated by the robot.

    I wish I had more details to give you. I'll add the lecturer's name when I get home from work. But yeah, this article definitely missed this aspect of research in information technology.
  • by 4of12 ( 97621 ) on Tuesday June 25, 2002 @10:52AM (#3762962) Homepage Journal

    Lower power usage is a good direction for regular computing, too.

    Many have noticed the increasing trend towards laptop computers as a primary computer for people concerned not just about portability, but also about space, electric power and noise issues in their abodes. A noisy tower and 60 lb space-hogging CRT is too uncool. Sleek LCD monitors, minimalist keyboards and no noisy cooling fans is where it's at.

    And, many have noted too, that most CPU power is going to waste these days. Except for a few games and for the server environment, most CPUs spend their time waiting for someone to type a character into MS Word or click the next link in a browser.

    I think you'll see a shift to more energy efficient CPUs in a big way in a much broader market sector than supercomputing. Namely, desktop client access devices will go this route, too.

  • The Register has a reply of sorts [theregister.co.uk], including a link to its pioneering article on computer assisted cooking technologies [theregister.co.uk]
  • Sounds like Q's going to need its own nuclear reactor for a battery....
