Supercomputing Hardware

Cray Unveils Its First GPU Supercomputer

An anonymous reader writes "Supercomputer giant Cray has lifted the lid on its first GPU offering, bringing it into the realm of top supers like the Chinese Tianhe-1A." The machine consists of racks of blades, each with eight GPU/CPU pairs (the blades can even be installed into older machines). It looks like Cray delayed releasing GPU hardware in order to work on a higher-level programming environment than is available from other vendors.
  • by Anonymous Coward on Tuesday May 24, 2011 @05:32PM (#36233452)

    ...will promptly be used for mining BitCoins.

    • by TWX ( 665546 )

      But is it compatible with Duke Nukem Forever?

      Sadly, the machine I casemodded in full Duke regalia in anticipation of DNF back in 1997 is wholly incapable of running the game, and since it's AT form factor it ain't gettin' upgraded...

  • A Beowulf cluster of these!
    • Meh, it's got "blades" -- it might as well be a Beowulf cluster.
      • by taktoa ( 1995544 )

        A Beowulf cluster of Beowulf clusters is not a Beowulf cluster, it's a multidimensional Beowulf cluster.
        Likewise, a BOINC of Beowulf clusters, or a "jagged Beowulf cluster", is not just a Beowulf cluster.

        • by jd ( 1658 )

          You'd want to make a MOSIX cluster of Beowulf clusters, so as to allow for each cluster to appear as a node without any conflicts. To make it 3D, you'd use a Kerrighed cluster of MOSIX clusters of Beowulf clusters.

    • All your cluster grits are belong to us

  • Kraken Cray XT5 (Score:1, Interesting)

    by Dremth ( 1440207 )
I did some rough calculations regarding NICS's Kraken Cray XT5 and Bitcoin mining. FYI, the Kraken was the 8th fastest supercomputer in November 2010. I determined that if the supercomputer put all of its resources toward mining bitcoins, it could generate 1,511.61 per day (or about $8,450.53/day). Granted, the Kraken has just regular CPUs doing the calculations. I can only imagine what a Cray supercomputer with GPUs in it would be capable of...
    • Re:Kraken Cray XT5 (Score:5, Informative)

      by icebraining ( 1313345 ) on Tuesday May 24, 2011 @06:26PM (#36233998) Homepage

Uh, no you couldn't. The rate of bitcoin creation is fixed (about 50 BTC per 10 minutes, for now). If you add more computational power, the system adjusts and it becomes proportionally harder to generate them, so the global rate stays stable.

So despite the 100,000-fold increase in mining difficulty over the past 15 months, the network continuously self-adjusts to issue one block of bitcoins about every 10 minutes. The difficulty increase is entirely caused by users competing with each other to acquire these blocks.

      http://blog.zorinaq.com/?e=49 [zorinaq.com]

      • by Dremth ( 1440207 )
Ok, yes, but the difficulty would increase for everyone mining as well. Last I checked, the entire Bitcoin network had a mining strength of 1,747 Ghash/s. The Kraken alone has about 367 Ghash/s. That's 21% of the entire network. With all that power coming into the network at once, you're still bound to make a TON of bitcoins, because you're essentially taking a substantially large portion of bitcoins from other miners. I did neglect to factor in the scaling of difficulty (and that's why I said it was a rough calculation).
        • Last I checked, the entire bitcoin network had a mining strength of 1,747 Ghash/s

          Then you're out of date, it's 3653 Ghash/s [bitcoinwatch.com] now.

          • by Dremth ( 1440207 )
            My mistake then. I was basing my information off of this: http://www.bitcoinminer.com/post/5622597370/hashing-difficulty-244139 [bitcoinminer.com]

            Total network hashing: 1,747 Ghash/sec

            Either the network strength has significantly increased in the past week, or one of those two sites shouldn't be trusted. Your source looks more reliable.

          • by qubezz ( 520511 )
That means with ten of these, you could have the majority of the compute power, enough to earn majority trust and poison the Bitcoin system with your own fake transaction records and wipe everybody's funny-money into oblivion. I'm sure the NSA has enough compute power that they could do this now, if they spent a day on Bitcoin instead of your emails. The FBI would probably pay as much attention to a bad actor crashing a fake currency as they would to someone hacking your WoW account and selling your Traveler's
            • by Intron ( 870560 )

              You are ignoring that it costs more to run a supercomputer than it could generate in bitcoins.

      • Wow. So that's why I left Bitcoin on for four days straight and didn't mine a single coin.

        Explain to me again why anyone is going to be running background Bitcoin processes in 2015?

        • by Dremth ( 1440207 )
          I would imagine that by 2015, the mining for bitcoin will have slowed quite a bit. But, by then it should have hopefully gained enough popularity that it can function as just a p2p economy.
        • by qubezz ( 520511 )

          You need to join a pool. Acting alone, and at the current rate processing power is being added with all the pump and dump slashspam, you likely won't win a 50 coin fabulous prize if you left your computer running for four years.

        • by maxume ( 22995 )

          If the apparent value of bitcoins is far higher than the cost of electricity needed to generate them, people will still run clients.

    • The Kraken reportedly consumes about 2.8 megawatts of power, so assuming your figures are accurate, the power alone would cost about $6,720/day (at $0.10/kWh) for a "profit" of $1730/day. Factor in the fact that it's a $30 million machine with a very short usable lifespan (i.e. massive depreciation), and they'd be losing a ridiculous amount of money.

  • Comment removed based on user account deletion
    • by jd ( 1658 )

      Occam is higher-level than C or Fortran, and it should be possible to adapt Erlang to parallelize across a cluster.

    • You're forgetting things like PGAS and other higher-level parallel programming models. MPI is the dominant technology in use so these machines have to support it well. But they also support more future-looking tools.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Really, you've tried it and it made you want to jump out of a window? OpenMP is an extremely simple, easy to use add-on to the C language. It is one of the two current standards used for parallelized scientific computing, and although it will eventually be succeeded by a language with more features, it will be difficult for its successor to match its ease and workmanlike grace.

I honestly have trouble believing someone could have much difficulty with it. If you want to have the work in a "for" loop parallelized, it takes a single pragma.

      • by KainX ( 13349 )

        MPI != OpenMP

        HTH.

      • by Coryoth ( 254751 )

What did you find awful about UPC? I've found it very pleasant to work with.

        • Have you tried it off-node?
          • Yep. Works great!

          • by Coryoth ( 254751 )

            Have you tried it off-node?

            Yes, and it works just fine; the only issues would be if I got my placement wrong via poor layout/blocking or I neglected to upc_memget something that I needed intense access to but for some reason couldn't make local in the initial layout. Neither of those require much forethought at all to avoid.

    • by MaskedSlacker ( 911878 ) on Tuesday May 24, 2011 @06:58PM (#36234268)

      The point is not for the job to be easy for your lazy ass, the point is for the code to execute as quickly as possible.

    • by dbIII ( 701233 )
      It depends on the task. Some of them are not complex. Some things are so embarrassingly parallel that you just tell the first node or whatever to apply a function to the first lot of data, feed the next lot to the next node and so on - then just concatenate the results together at the end. There's a lot of stuff in geophysics like that, for example - apply filter X to twenty million traces (where a trace is just like an audio track). You could do that with twenty million processor cores if you had the
    • There's more information on the GPU programming model in the HPCWire article. It is OpenMP directive-based, making it quite a bit easier to use than low-level CUDA and other such things.

  • by LWATCDR ( 28044 ) on Tuesday May 24, 2011 @06:23PM (#36233950) Homepage Journal

There are still a lot of HPC applications written in Fortran. Will this run them?
Also, how much porting, if any, will be needed to get good results from this?

    • by s.d. ( 33767 )
PGI makes a CUDA Fortran compiler [pgroup.com], but with GPUs it's not as simple as just recompiling; the code has to be rewritten to take advantage of the accelerator and its unique architecture.
      • by LWATCDR ( 28044 )

So I guess the second part of the question is: have the HPC libraries been ported yet? I have heard one of the big reasons Fortran is still so popular is the large collection of highly optimized HPC libraries. The other reason is that Fortran is supposed to be really easy to optimize, which I can believe.

  • Into the Realm? (Score:4, Informative)

    by David Greene ( 463 ) on Tuesday May 24, 2011 @06:28PM (#36234010)

    bringing it into the realm of top supers like the Chinese Tianhe-1A

    Uh, Cray already has machines in service that blow Tianhe-1A out of the water on real science. Tianhe-1A doesn't even exist anymore. It was a publicity stunt. Cray is already making the top supers. It's others that have to catch up.

  • It's made of cores!
