Supercomputing Hardware

NSF Announces Supercomputer Grant Winners 82

An anonymous reader writes "The NSF has tentatively announced that the Track 1 leadership class supercomputer will be awarded to the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The Track 2 award winner is University of Tennessee-Knoxville and its partners." From the article: "In the first award, the University of Illinois at Urbana-Champaign (UIUC) will receive $208 million over 4.5 years to acquire and make available a petascale computer it calls "Blue Waters," which is 500 times more powerful than today's typical supercomputers. The system is expected to go online in 2011. The second award will fund the deployment and operation of an extremely powerful supercomputer at the University of Tennessee at Knoxville Joint Institute for Computational Science (JICS). The $65 million, 5-year project will include partners at Oak Ridge National Laboratory, the Texas Advanced Computing Center, and the National Center for Atmospheric Research."
  • by Anonymous Coward
    I will kill you.
    • Re: (Score:3, Funny)

      by locster ( 1140121 )
      Ahh, what the heck - A terminator walks into a bar... barman: Why the mimetic polyalloy face? terminator: I'm a T-1000 terminator from the future, sent to kill Sarah Connor.
    • Wrong movie. They're building one of the supercomputers in Urbana, Illinois, which means that the HAL Plant [wikipedia.org] must finally be operational, just a few years behind schedule.
    • I won't, but I could make some Deus Ex jokes! Well, I guess I better not.
  • But is this the same award everyone was pissed about because it was going to IBM?

    I'm curious whether that was a separate award, whether the information was simply wrong, or whether they changed their minds afterwards.
    • I think that the NSF funds it, the universities get to run the research, and IBM gets to build the machine.

      http://hardware.slashdot.org/article.pl?sid=07/08/06/0547226 [slashdot.org]

      -WtC

      *please insert sig for 2 more minutes*
      • The "Blue River" machine is the IBM system that was referred to in the previous submission. Generally, the universities pick vendors to work with, for building a machine of this size and capability is well beyond the capability of any university. Last year, the University of Texas, Austin won a machine working with the vendor Sun.
        • by Orville ( 104680 )
          "The University" won't have the only input into the selection of vendors: both of these projects are intended for a national audience as part of the NSF's Cyberinfrastructure program.

          Of particular interest: the NSF "Track 2" machine is intended to complement the capabilities of the TeraGrid, and it is being built at a site (Oak Ridge National Laboratory) where the DOE is also building its own petascale machine.

          http://www.gcn.com/online/vol1_no1/40250-1.html [gcn.com]

          (Both will rely on the readily available, federally admi
  • TGDaily coverage (Score:5, Informative)

    by Anonymous Coward on Wednesday August 08, 2007 @08:07PM (#20164143)
    TG Daily is also covering this with more details [tgdaily.com], and has a picture tour of the current NCSA supercomputer facility [tgdaily.com].
  • I approve (Score:5, Funny)

    by weak* ( 1137369 ) on Wednesday August 08, 2007 @08:12PM (#20164189)
    I'm glad we have the NSF out there supporting the development of faster and faster supercomputers. Pretty soon these machines will be able to locate the correct Sarah Connor in the phone book on the first try.
  • ... ever so true ...
  • Petascale? (Score:1, Flamebait)

    by inKubus ( 199753 )
    Petascale (n) - a unit of measure equivalent to the dead weight of all the cats used to test lipstick by getting it rubbed on their eyes over a calendar year.
    • no no no, petascale just means computers that are typically about 10^15 mm across. I was going to get one but I just don't have the space.
  • wow... (Score:5, Informative)

    by djupedal ( 584558 ) on Wednesday August 08, 2007 @08:26PM (#20164325)
    "Infinite: Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, real "wow, that's big," time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we're trying to get across here."
    • Re: (Score:2, Funny)

      by weak* ( 1137369 )

      "Infinite: Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, real "wow, that's big," time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we're trying to get across here."
      Who gave this guy E?
  • I can't help but wonder whether these super machines are ever actually used to capacity. Since they are housed at universities I suspect that some professor runs two or three stupid little mental exercises on it and then it just sits there, glomps electricity and gathers dust.
    • As far as UIUC is concerned, they have some top-notch people who are pushing computing to the limits with long time-scale molecular dynamics runs. And they're not doing it to model a few atoms, either. Klaus Schulten has been doing some very impressive work on simulating protein dynamics. Take a look at http://www.ks.uiuc.edu/Overview/KS/research.html [uiuc.edu].

      So I have a feeling that the new machines are going to be humming right along.
    • Re: (Score:3, Interesting)

      by Entropius ( 188861 )
      My PhD advisor does computational quantum chromodynamics on supercomputers. Quantum chromodynamics is the current theory of the nuclear force. Unfortunately, nobody can actually calculate all that much with it because the math is too hard, but we think it's the right theory because of some symmetry arguments. One of the big challenges at the moment in high-energy theory is to actually see what QCD predicts. Basically, the perturbation + renormalization approach that worked so well for quantum electrodynamics breaks down for QCD at low energies, where the coupling is strong, so the practical route is to put the theory on a spacetime lattice and grind out the integrals numerically, and that takes exactly this kind of machine.
      • We have a new jargon winner, Ladies and Gentlemen! The parent should either be modded informative or bullshit, and I can't tell which, which I'm finding pretty amusing. Mod me toasted.
        • Re: (Score:1, Insightful)

          by Anonymous Coward
          Actually, he uses surprisingly little jargon, considering how much stuff he COULD have thrown in there. (QCD even has its own custom-built supercomputers - see QCDOC) Then there are specific algorithms, approaches, etc. All in all, he sounds like he knows what he's talking about and summed it up pretty well, unsurprisingly since he's a grad student in the field, it seems. ... And he's right, the solution is big fucking computers. :)
      • Re: (Score:2, Informative)

        In fact, a lattice QCD problem was one of the model problems for the Track 1 proposals. Proposers had to "provide a detailed analysis of the anticipated performance of the proposed system on the following set of model problems...A lattice-gauge QCD calculation in which 50 gauge configurations are generated on an 84^3*144 lattice with a lattice spacing of 0.06 fermi, the strange quark mass m_s set to its physical value, and the light quark mass m_l = 0.05*m_s. The target wall-clock time for this calculation
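        Just to put those model-problem numbers in perspective, here is a back-of-the-envelope sketch. The storage assumption (4 SU(3) links per site, stored as 3x3 complex double-precision matrices) is the usual lattice convention, not something taken from the solicitation:

        #include <stdio.h>

        /* Rough size of the Track 1 model problem quoted above: an
           84^3 x 144 lattice. The per-site storage figure is an assumed
           illustrative convention (4 links/site, 3x3 complex doubles). */
        int main(void) {
            long long nx = 84, nt = 144;
            long long sites = nx * nx * nx * nt;            /* ~85 million sites */
            long long bytes_per_site = 4 * 3 * 3 * 2 * 8;   /* 576 bytes */
            double config_gb = (double)(sites * bytes_per_site) / 1e9;

            printf("lattice sites:    %lld\n", sites);
            printf("one gauge config: ~%.0f GB (links only)\n", config_gb);
            printf("50 configs (as in the model problem): ~%.1f TB\n",
                   50.0 * config_gb / 1e3);
            return 0;
        }

        Nothing exotic, just a reminder that a single configuration at that volume is already tens of gigabytes before you do any physics with it.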

    • Re: (Score:3, Informative)

      by Minter92 ( 148860 )
      I worked as a system engineer on the supercomputers at NCSA from 97 till 2000. Once they are up and stable they are pretty much pushed to the limits. The users are constantly pushing for more procs, more memory, more storage. They'll use every flop they can get.
    • Yes
    • by Kohath ( 38547 )
      I'm wondering why we need government grants to develop computers now. There are many companies that make computers. They'll make a fast one for you if you order it.

      There are also real projects from the NSA and other government branches that need fast computers. Why not a specific grant to develop a computer for a specific application rather than just a "make a fast supercomputer"?

      Should we have grants for "make a tastier fast-food french fry" next?
      • by Orville ( 104680 )
        There are also real projects from the NSA and other government branches that need fast computers. Why not a specific grant to develop a computer for a specific application rather than just a "make a fast supercomputer"?

        Because specific projects usually have a very finite lifetime, and supercomputing resources are terrifically expensive: that's why the NSF has "Cyberinfrastructure" as a major project. Researchers will apply for computer time as part of the normal grant process: current facilities are al
      • A company like Sun or IBM eventually does get paid to make a fast computer... the body that won the grant just gets the right to make the nuts and bolts decisions about what sort of computer, how big, where it lives, etc. They're the one with the experts on these topics on their payroll, not the granting agency or the manufacturers. The reason there's a big grant up for grabs is that the sort of work that gets done on these machines is all paid for through government research grants. For buying computers,
  • Since it's going to be massively parallel, it's only 500 times more powerful than some other computer if it has a beautifully parallelized problem to solve.

    I've programmed computers scientifically for twenty-odd years, and one thing I've found is that massively parallel computers are very difficult to use efficiently, except when you're solving one of the relatively few problems which are obviously parallelizable and yet have interesting results. For example, solving 500 million tic-tac-toe games simultane
    • Yeah, sure, fine, but first we gotta get Vista to boot.
      • You are attempting to use CPU core 153. Cancel or Allow? Allow.

        You are attempting to use CPU core 154. Cancel or Allow? Allow.

        ...

    • Re: (Score:2, Interesting)

      by OldChemist ( 978484 )
      You make a good point. It is now possible to buy a quad core from Dell for about $750 (or less) to play around with. However, as mentioned earlier in this discussion, the work of Klaus Schulten at Illinois is quite instructive. His program NAMD (not another molecular dynamics program) has been designed from the ground up to scale well on many processors. This program does a lot better in this respect than most other md programs out there, although this will no doubt change. So don't despair about this
      • Well...OK, and I know the Schulten work quite well from my time at UIUC. It's certainly impressive in many ways.

        But my suggestion is that fundamental advances will only be made on small, cheap systems. See, a machine like this is so expensive that it's very hard to justify doing blue-sky goofball things on it, which will almost certainly turn out to be dumb ideas. You usually have to write a proposal, and the committee usually won't risk massive resources on an idea that is shaky, speculative as heck, or
      • I'm a grad student in molecular dynamics, and I know Klaus, his code and his work. While you do want good scaling performance with respect to the number of processors, that's not a useful measure of the quality of the implementation compared with other programs. Total throughput of a given system on a given number of processors is a much better indicator. Why? Well if I write code that sucks on one processor, but which gets less-sucky fast when I add more processors to the problem (why this can happen is a
        • Thanks for your comments. You make a good point that scaling alone is not enough if the code being scaled is inefficient. So you are probably aware that a good thing to check is how many "seconds" of some standard simulation can be done per computing unit. Although Gromacs is supposed to be the fastest gun in the West (and it is, on a single processor), it doesn't scale very well, at least in my experience. This may be due to the kind of machines I have access to. You may want to look at the Gromacs w
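          To make the scaling-versus-throughput point concrete, here is a toy comparison; every number in it is invented for illustration and is not a benchmark of NAMD, Gromacs, or anything else:

          #include <stdio.h>

          /* Hypothetical codes: A is slow per core but scales almost
             perfectly; B is fast per core but scales worse. Invented numbers. */
          int main(void) {
              double a_single = 0.5, b_single = 2.0;  /* ns/day on one core */
              double a_eff = 0.95, b_eff = 0.60;      /* assumed efficiency at 64 cores */
              int cores = 64;

              printf("code A: %.1f ns/day on %d cores (better scaling)\n",
                     a_single * cores * a_eff, cores);
              printf("code B: %.1f ns/day on %d cores (worse scaling, more science done)\n",
                     b_single * cores * b_eff, cores);
              return 0;
          }

          The "better scaling" code still gets less science done per day, which is the parent's point about judging by total throughput.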
    • by Vader82 ( 234990 )
      I'll have to disagree with you there. There are plenty of people who can think completely in parallel but are limited by programming languages and all the tedium. For example, the concept of "take this pile of rocks from here and get it there" isn't inherently serial. I'd say 99% of people can grasp that you can do one rock at a time (hands), 10 rocks at a time (shovel), 100 rocks at a time (wheelbarrow) or 10,000 rocks at a time (bulldozer). Maybe that's too simplistic for your tastes, but most concepts a

      • Throwing a bunch of rocks at a single bulldozer is a serial act.

        The parallel problem is to get a fleet of 100 bulldozers or 1000 bulldozers or 10,000 bulldozers simultaneously attacking a pile of rocks so that:

        A) The bulldozers aren't constantly colliding with one another, and

        B) When the bulldozers back off to avoid colliding with one another, they aren't all just sitting around twiddling their thumbs, needlessly burning diesel fuel [not to mention "prevailing" union wages & time value of the loa
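        A minimal sketch of what satisfying (A) and (B) can look like in code, using plain static partitioning with POSIX threads; move_rock() and the rock counts are hypothetical stand-ins, not anyone's actual workload:

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_DOZERS 4
        #define NUM_ROCKS  10000

        /* Each bulldozer works its own disjoint slice of the pile, so none of
           them collide (A), and every slice is the same size, so none of them
           sit around idle burning diesel (B). */
        static int moved[NUM_DOZERS];

        static void move_rock(int rock) { (void)rock; /* pretend to move it */ }

        static void *work(void *arg) {
            int id = (int)(long)arg;
            int chunk = NUM_ROCKS / NUM_DOZERS;
            for (int r = id * chunk; r < (id + 1) * chunk; r++) {
                move_rock(r);
                moved[id]++;
            }
            return NULL;
        }

        int main(void) {
            pthread_t t[NUM_DOZERS];
            for (long i = 0; i < NUM_DOZERS; i++)
                pthread_create(&t[i], NULL, work, (void *)i);
            for (int i = 0; i < NUM_DOZERS; i++)
                pthread_join(t[i], NULL);
            for (int i = 0; i < NUM_DOZERS; i++)
                printf("bulldozer %d moved %d rocks\n", i, moved[i]);
            return 0;
        }

        The real pain the parent is pointing at shows up when the slices are not independent and the bulldozers have to coordinate with each other.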
      • I'd say you illustrate my point, that thinking "in parallel" is unnatural and difficult.

        First of all, your problem with the rocks is what in the business we call trivially parallelizable. You solve it like this:

        #include <stdio.h>

        /* Hypothetical stand-in for the real work; reports one rock moved. */
        int move_one_rock(void) { return 1; }

        int main(void) {
            int n, result;

            printf("Enter number of rocks: ");
            scanf("%d", &n);

            result = move_one_rock();

            /* n rocks moved = n times one rock moved; that's the whole "algorithm". */
            return n * result;
        }

        Secondly, there are plenty of resources to let you program a trivial thing like unrolling a loop wi
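        For what it's worth, the loop case being alluded to really is close to a one-pragma affair these days; here is a sketch with OpenMP, using the same hypothetical move_one_rock() as above:

        #include <omp.h>
        #include <stdio.h>

        /* Hypothetical stand-in for the real work. */
        static int move_one_rock(void) { return 1; }

        int main(void) {
            int n = 1000000, moved = 0;

            /* The runtime hands disjoint chunks of the loop to each thread;
               the reduction keeps the running count race-free. */
            #pragma omp parallel for reduction(+:moved)
            for (int i = 0; i < n; i++)
                moved += move_one_rock();

            printf("moved %d rocks using up to %d threads\n",
                   moved, omp_get_max_threads());
            return 0;
        }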

  • NSF (Score:1, Funny)

    by Anonymous Coward
    Why is the National Sanitation Foundation funding supercomputers?

    http://www.nsf.org/ [nsf.org]

    • by 777v777 ( 730694 )
      Go look at the proposal. This machine is for the sole purpose of performing revolutionary computational science. They want scientific breakthroughs from this machine. You have to be going after those kinds of problems to get any time on it, according to the CFP (I think).
  • though it would be at least 6 years too late.
  • Instead of Blue Waters, which is a singularly inappropriate name for a university located 900 miles from the nearest blue water, wouldn't Boneyard be more appropriate?
    • Don't you mean "paved over drainage ditch"?
    • Instead of Blue Waters, which is a singularly inappropriate name for a university located 900 miles from the nearest blue water, wouldn't Boneyard be more appropriate?

      I dunno... Lake Michigan is pretty freaking big, and pretty freaking blue. At least from my personal observations. It's only about 100 miles from UIUC.

      And yeah, I know what "blue water" means in the Navy world, but then again, the Navy does a lot of training on Lake Michigan.

  • .."Blue Waters," which is 500 times more powerful than today's typical supercomputers. The system is expected to go online in 2011.

    But how much more powerful will it be than the supercomputers of 2011? :)

  • I took my grant check straight to the bank. They refused to cash it. When I asked why, they pointed out that it has N.S.F. written right on the front.
  • Oh my, 1 PFLOPS... that's not [stanford.edu] that big anymore. 4 years from now they should be talking 20+ PFLOPS at least.

    I'm very interested in their bandwidth numbers and architecture, which they do not mention.
    • Well, they did say petascale. It could be, say, 10 or 20 PFLOPS.
    • by scheme ( 19778 )

      Oh my, 1 PFLOPS... that's not that big anymore. 4 years from now they should be talking 20+ PFLOPS at least.

      There's a huge difference between a distributed system offering 1 PFLOPS and a tightly integrated system offering a fast interconnect and a petaflop of computing power. It's kind of like saying a semi truck isn't all that impressive because you have a fleet of cars with the same total cargo capacity. That's great until you need to move a large container or block of stuff that can't be parceled out.

      • by Duncan3 ( 10537 )
        Of course they are completely and totally different things! Which is why I want to know bandwidth numbers and topology. But promising to do a PFLOP in 2011 for that kind of money is not good.

        I'm sure we'll be hearing more, and it will be a very nice machine.
    • The machine is supposed to be designed to deliver a sustained performance of 1 PFLOPS. The chart in the link you provided above shows peak performance. Even very efficient algorithms typically achieve only about 20-40% of a machine's peak, a problem that only gets worse on large parallel systems, so the machine will need a peak performance much greater than its sustained target in order to achieve this.
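      Rough arithmetic on that point (the 20-40% range is the figure from the comment above, not a measured number):

      #include <stdio.h>

      /* If sustained performance is only 20-40% of peak, sustaining 1 PFLOPS
         takes considerably more peak capability. */
      int main(void) {
          double sustained = 1.0;          /* PFLOPS the machine must sustain */
          double lo = 0.20, hi = 0.40;     /* assumed efficiency range */

          printf("required peak: %.1f to %.1f PFLOPS\n",
                 sustained / hi, sustained / lo);
          return 0;
      }

      So a sustained petaflop under those assumptions means a machine with roughly 2.5 to 5 PFLOPS of peak.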
  • Could you imagine a Beowulf Cluster of these? Something's gotta run Web 3.0!
