
Titan Supercomputer Debuts for Open Scientific Research

hypnosec writes "The Oak Ridge National Laboratory has unveiled Titan, which it claims is the world's most powerful supercomputer, capable of 20 petaflops of performance. The Cray XK7 system contains a total of 18,688 nodes, each based on a 16-core AMD Opteron 6274 processor paired with an Nvidia Tesla K20 graphics processing unit (GPU). To be used for researching climate change and other data-intensive tasks, the supercomputer is equipped with more than 700 terabytes of memory."

  • Why not have it figure out a way to help us build clean energy sources and reduce contamination? The climate changes all the time. We should learn to live with it.

    • by mcgrew ( 92797 ) * on Monday October 29, 2012 @01:57PM (#41807727) Homepage Journal

      I see that the guy who moderated you insightful is as ignorant of computers' workings as you are. Computers don't figure things out. There is no such thing as an "electronic brain" or a "thinking machine." Computers are nothing more than huge electronic abacuses. They don't figure things out; the scientists figure things out (theorize) and then test their theories using computerized models when they can't do direct testing.

      • Jeebus! I'm just saying that climate change has become a bullshit distraction and nothing more than a way to get unlimited funding, but all these people are as hysterical about it as they are about terrorism. We don't need to use megawatts of power to predict what's going to happen 50 or 100 years from now like it's all gonna happen over a five second event. Oh well, I guess I have to assume you didn't get the gist of my original post, which was pretty much do what you can to reduce pollution regardless how

  • I'm waiting for the ShellShocker promo in my email before I upgrade to this baby.
  • by HappyHead ( 11389 ) on Monday October 29, 2012 @12:41PM (#41806171)
    The memory they list as an exciting "700+TB" is not actually all that exciting: divide it by the number of nodes and then by the number of CPU cores and you're left with only about 2GB of RAM per core, which is pretty much standard for HPC cluster memory (see the arithmetic sketch at the end of this sub-thread). The only really impressive thing here is the number of compute nodes involved, and any single submitted job will _not_ have access to all of them. I manage similar, though smaller, research clusters myself, and frankly, the only clusters we had with less than 2GB per CPU core were retired long ago. Essentially, this means they're running the cluster with the minimum amount of memory that is considered acceptable for the application.
    • Would it actually be useful? Yes, you'd have more memory in total, but any given amount of memory for a computational job would have constrained bandwidth. As far as I understand it, this is the Achilles' heel of modern machines: what use is a large memory when you can barely keep the execution units busy, even with caches? Especially in HPC, where the coherence of accesses often just isn't there.
      • by Anonymous Coward

        Not really... some applications (e.g., fluid dynamics simulations) scale just fine.

      • Larger memory per node is useful when manipulating stupidly huge data sets. Sometimes speed isn't the most important aspect of getting the calculations done, and other factors come into play, like memory size/bandwidth, available disk space, the speed of that disk space, and even network connectivity if you're doing MPI programming.

        While I realize it would be great to teach everyone efficient programming techniques, so they could streamline their memory usage down to the bare minimum, it's not always possible.
        • by gentryx ( 759438 ) *
          If you need more memory, simply allocate more nodes. Problem solved. Hardly anyone needs more than 2 GB/core.
    • Re: (Score:3, Informative)

      by Anonymous Coward

      Actually, that's not quite true: it is possible to submit a job request for all 18,688 compute nodes and in fact the scheduling policy gives preference to such large jobs. It's true that there aren't very many applications that can effectively use all that many nodes, but there are a few (such as the global climate simulations). You're correct about the amount of ram per CPU core, though.

    • The XE6 that my team uses allocates job reservations at the node level. Each job gets a whole node of 16 cores with 32G of RAM. If you have a memory-intensive task, you only use as many cores as will fit in the available memory. It's a trade-off: some tasks will waste RAM, some will waste CPUs.
    • by gentryx ( 759438 ) *

      Titan is a capability machine, which distinguishes it from capacity machines. As such it is designed for large/extreme-scale jobs (which includes full-system runs). I expect the techs are just now prepping Linpack for the next Top500 list at SC12.

      The ratio of 2 GB/core isn't going away anytime soon. The reason is: a) per-core speed is stagnating, so adding more memory per core just means each core ends up with more memory than it can process in a timely manner, and b) if you need more memory, you can simply allocate more nodes.
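
  Back-of-the-envelope arithmetic for the figures tossed around in this sub-thread, using only the numbers quoted in the summary and the comments (700 TB total, 18,688 nodes, 16 Opteron cores per node, and the 32 GB XE6 node mentioned above); the per-task memory figure is made up purely for illustration. A rough sketch, not anything measured on the machine:

      /* memory_math.cu -- host-only sketch (plain C/C++ would do just as well).
         All inputs are the figures quoted in the summary and the comments above,
         except per_task_gb, which is a hypothetical memory-hungry task. */
      #include <stdio.h>

      int main(void) {
          const double total_mem_tb = 700.0;   /* "more than 700 terabytes" (summary)   */
          const int    nodes        = 18688;   /* Cray XK7 node count (summary)         */
          const int    cores        = 16;      /* Opteron 6274 cores per node (summary) */

          double gb_per_node = total_mem_tb * 1024.0 / nodes;  /* ~38 GB per node       */
          double gb_per_core = gb_per_node / cores;            /* ~2.4 GB per CPU core,
                                                                   i.e. roughly the
                                                                   2 GB/core quoted above */
          printf("per node: %.1f GB, per CPU core: %.1f GB\n", gb_per_node, gb_per_core);

          /* The XE6 node-level trade-off: 16 cores sharing 32 GB means a task
             needing, say, 6 GB leaves most of the node's cores idle. */
          const double node_ram_gb = 32.0;
          const double per_task_gb = 6.0;
          printf("memory-hungry tasks per node: %d of %d cores used\n",
                 (int)(node_ram_gb / per_task_gb), cores);
          return 0;
      }

  The same division is also what the "just allocate more nodes" advice above relies on: doubling a job's node count doubles its aggregate memory, even though the per-core ratio stays fixed.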

  • by CajunArson ( 465943 ) on Monday October 29, 2012 @12:48PM (#41806285) Journal

    GPU means graphics processing unit. Consumer GPUs are now pressed into service for compute tasks like BOINC and folding, but they are still GPUs (they can still do graphics).

    Does Nvidia even bother to put in the graphics-specific silicon and output hardware on the K20, or should these things really be called.. I dunno.. "compute accelerators" or something like that?

    • General Processing Units
      • That's the problem though... the K20 is definitely not "general" but highly specialized. Throw the right type of problem at it through optimized CUDA code and it'll run great (see the toy kernel at the end of this sub-thread). Throw it the wrong type of computational problem and it'll go nowhere fast. That's specialized instead of general.

    • by Shatrat ( 855151 )

      Maybe you should consider the origin of the word? http://en.wikipedia.org/wiki/Graph_(mathematics) [wikipedia.org]

      • You have it backwards... graphics have been around since the time of the caveman. "Graph theory" only came into existence in the late 19th century and took its original cues from hand-drawn graphs... which are a type of graphics. Plus, adding and multiplying numbers, which is basically what the K20 does on a huge scale, is by no means an operation that is limited to graph theory.

    • by Anonymous Coward

      With teraflops of single and double precision performance, NVIDIA® Kepler GPU Computing Accelerators are the world’s fastest and most efficient high performance computing (HPC) companion processors. [nvidia.com]

      The K20 really is still built on graphics-specific architecture, but it would be a waste to include the output hardware; just think of all the servers that have never had a monitor attached.
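
  To make the "right type of problem" remark above concrete, here is a minimal toy CUDA kernel (a SAXPY-style vector update). The names and sizes are made up, and this is only a sketch of the data-parallel pattern that compute GPUs like the K20 are built for, not code from the article:

      /* toy_saxpy.cu -- hypothetical sketch; compile with nvcc. Every thread does
         the same simple arithmetic on its own element: regular, branch-free,
         coherent memory access -- the kind of workload a compute GPU digests well. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <cuda_runtime.h>

      __global__ void saxpy(int n, float a, const float *x, float *y) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n)
              y[i] = a * x[i] + y[i];
      }

      int main(void) {
          const int n = 1 << 20;                     /* 1M elements (made-up size) */
          const size_t bytes = n * sizeof(float);

          float *hx = (float *)malloc(bytes);
          float *hy = (float *)malloc(bytes);
          for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

          float *dx, *dy;
          cudaMalloc(&dx, bytes);
          cudaMalloc(&dy, bytes);
          cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
          cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

          /* One thread per element; 256 threads per block. */
          saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
          cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

          printf("y[0] = %f (expect 4.0)\n", hy[0]);

          cudaFree(dx); cudaFree(dy);
          free(hx); free(hy);
          return 0;
      }

  A workload dominated by branches or irregular, pointer-chasing memory access would not map onto those thousands of threads nearly as well, which is the "wrong type of problem" half of the comment.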

  • It's a great and important tool for policy makers to be able to crunch this magnitude of data, but the inability to do so is not the problem with respect to climate change.

    The problem is purely political, specifically, American conservatives are denying this science the same way they deny the science of evolution, the same way they deny the overwhelming proof that smoking causes cancer and second hand smoke does the same, the same way they denied CFCs caused a hole in the ozone layer and risked all our lives on th

  • I don't know if I'm alone in this, but I kind of miss the days when supercomputers weren't just clusters of off-the-shelf components. I feel we've lost something.
    • On the other hand, the modern GPU is much closer to the vector units of "classical" supercomputers than anything minis/PCs of that era had.
      • True enough. I had an account on a Convex C2 supercomputer when I was in university which was very vectorised. In our department, we were able to mount the 9-track tapes ourselves. Yeah, that's how old I am!
    • But then again it does tell you that the off-the-shelf components we all use are none too shabby. For, as we are all too sick of hearing, the boxes we use right now well outpace those custom-built supercomputers [wikipedia.org] created in the days of yore. Okay, maybe not even yore, maybe even less time than that. But still...
    • I don't know if I'm alone in this, but I kind of miss the days when supercomputers weren't just clusters of off-the-shelf components. I feel we've lost something.

      HPC is being forced to use off-the-shelf components. There isn't the funding for R&D of application-specific hardware.

    • +1. It's hard to care about a "faster" computer when faster just means more nodes. Wow, how are we ever going to top that one? Just build one with more nodes. It's become much more a question of money than innovative technology.

  • Rather than another supercomputer, couldn't they spend the money on actually upgrading Oak Ridge's infrastructure so the buildings aren't falling apart, and 80-year-old nuns can't walk through the perimeter fence?
  • The official Top 500 list [top500.org] (last updated 2012/06) shows "Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom" as the number one supercomputer. Sequoia is nearly as powerful as Titan.
  • 'Cause I mean - they're like, so FAST. Right? Right? Surely not grubby old, crufty old Linux - right?
    • You are absolutely wrong. 75% of supercomputers run on Linux. Go and see [top500.org].
      • You are absolutely wrong. 75% of supercomputers run on Linux. Go and see [top500.org].

        Shocking! Say it ain't so! It must be because nasty old Linux stole all that technology from Bill Gates and Steve Jobs.

      • by fa2k ( 881632 )

        You are absolutely wrong. 75% of supercomputers run on Linux. Go and see [top500.org].

        I thought that sounded low, so I went and checked at http://i.top500.org/stats [top500.org]. Linux has 92.4% of the top 500; then you have "Unix" at 4.8% and Mixed at 2.2%.

  • Giveaway upon giveaway. "Hand-tuned" CAGW models have been reality-constrained GIGO now for decades because they do not represent a full set of physics [nipccreport.org]. Glad AMD has stuck x86 processors out this far, but not so much for this boondoggle.
  • 20 petaflops of performance...

    ...700 terabytes of memory

    Pfffft that all?!

  • by SeanAhern ( 25764 ) on Monday October 29, 2012 @04:34PM (#41810099) Journal

    My favorite part of the article is the photo that accompanies it. Two of my scientific visualizations are on there, the red/yellow picture of an Alzheimer's plaque being attacked by drugs (behind the N of TITAN) and the silver structure of a proposed ultra-capacitor made from nanotubes (to the right of the N).

  • I wonder how that would do in a Beowulf cluster!
