Education Supercomputing Hardware

Indiana University Dedicates Biggest College-Owned Supercomputer 83

Indiana University has replaced its supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication, HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
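As a back-of-the-envelope check on that petaflop figure, the node counts in the summary can be multiplied out in a few lines of Python. The per-part peak numbers below (K20 double-precision throughput, CPU clock, FLOPs per cycle) are outside assumptions for illustration, not figures from the article:

    # Peak-FLOPS estimate for Big Red II from the node counts above.
    CPU_NODES, SOCKETS, CORES_PER_CHIP = 344, 2, 16  # CPU-only nodes
    GPU_NODES = 676               # one 16-core Interlagos + one K20 each

    CLOCK_HZ = 2.5e9              # assumed CPU core clock
    FLOPS_PER_CYCLE = 4           # assumed DP FLOPs/cycle/core (shared Bulldozer FPUs)
    K20_PEAK = 1.17e12            # assumed K20 double-precision peak, FLOPS

    cpu_cores = (CPU_NODES * SOCKETS + GPU_NODES) * CORES_PER_CHIP
    cpu_peak = cpu_cores * CLOCK_HZ * FLOPS_PER_CYCLE   # ~218 TFLOPS
    gpu_peak = GPU_NODES * K20_PEAK                     # ~791 TFLOPS

    print(f"{cpu_cores} CPU cores")                             # 21824
    print(f"peak: {(cpu_peak + gpu_peak) / 1e15:.2f} PFLOPS")   # ~1.01

Under those assumptions the GPUs supply roughly four-fifths of the peak, and the total lands right at the quoted "1 quadrillion floating-point operations per second."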

Comments Filter:
  • Biggest? Really? (Score:4, Interesting)

    by wonkey_monkey ( 2592601 ) on Monday April 29, 2013 @04:30AM (#43579059) Homepage
    Computers used to be a lot bigger.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      How can you tell? In TFA, you just have a photo of one corner of the building.

      • Comment removed based on user account deletion
        • by Anonymous Coward

          Yeah, it's hard to tell the scale, but you can see a security camera and the outside lighting, so it seems to be one story....

        • by cdrudge ( 68377 )

          IU is a public university. Would you prefer tax dollars get spent on a masterpiece of architectural design for a data center?

          • by gtall ( 79522 )

            Depends upon what you mean by public. Most of their money these days comes not from the state but from tuition, grants, etc. Many "public" unis are in the same position because lawmakers have increasingly found education to be not worth their while.

        • by Z_A_Commando ( 991404 ) on Monday April 29, 2013 @09:36AM (#43580605)
          It's a Tier 3 data center built to withstand F5 tornadoes and earthquakes. All the pretty glass stuff doesn't really survive in 300 MPH winds. Also, the main receiving area in the back looks like something out of Jurassic Park. And in Bloomington, they think limestone is very pretty.
    • by Chrisq ( 894406 )

      Computers used to be a lot bigger.

      But the largest computer ever built [wikipedia.org] (physically) executed only 75,000 instructions per second and had 70K of memory (though that was in 32-bit words)!

  • by blackicye ( 760472 ) on Monday April 29, 2013 @04:37AM (#43579093)

    Cray is still kicking around??

    • by fuzzyfuzzyfungus ( 1223518 ) on Monday April 29, 2013 @05:35AM (#43579229) Journal

      Cray is still kicking around??

      I don't think they've kicked out an original processor design in ages; but they are still (among) those you talk to if you want something a little more tightly coupled, and/or a bit more 'turnkey' than "10,000 of whatever Dell is selling, and some 10GbE switches".

      • by RicktheBrick ( 588466 ) on Monday April 29, 2013 @07:53AM (#43579759)
        Maybe it is because Seymour Cray died in 1996 of complications from an automobile accident. http://en.wikipedia.org/wiki/Seymour_Cray [wikipedia.org] I would think that most people with a passing interest in supercomputers would have known that fact. Seymour Cray did a lot of work establishing supercomputing. I can remember seeing his supercomputers on the cover of Popular Science back in the '70s. I think he deserves a little more respect than shown here by that remark.
        • Maybe it is because Seymour Cray died in 1996 of complications from an automobile accident. http://en.wikipedia.org/wiki/Seymour_Cray [wikipedia.org] I would think that most people with a passing interest in supercomputers would have known that fact. Seymour Cray did a lot of work establishing supercomputing. I can remember seeing his supercomputers on the cover of Popular Science back in the '70s. I think he deserves a little more respect than shown here by that remark.

          I meant no disrespect to Seymour Cray; I just didn't think the company was still all that relevant anymore.

      • by Anonymous Coward

        Google "cray blackwidow". Last custom cray-designed vector processor

  • by Anonymous Coward

    until some wannabe comedian makes a "Does it run Linux?" post. Despite the fact that it's one of the earliest /. memes and has been used over a million times, it will get moderated "+5 Funny" because originality and creativity are lost on this crowd.

  • It's great to see a university have a monster like this for research use. And old universities, you would think, are well suited for these kinds of monsters. Their computer centers were built at a time when they really were filled with monster machines that your iPad would run circles around today performance-wise. They were replaced in the 1990s by servers that would fit into a closet. But they still have all this space that can be filled with racks upon racks of supercomputer nodes. However, I

    • by mendax ( 114116 )

      Ack.... it's 3:30 in the morning and I can't spell. It's "probably dull TO look at". That ought to be modded down just for that little slip.

    • Check this one out... [degiorgi.math.hr]

      (More broadly, though, the point is largely valid. Reel-to-reel deserved to die, technologically; but damn did it look 'high tech' churning away in the background. Now that everything fits in standard 72U racks, it's mostly just a "should we go for basic black, or spring for custom powdercoat and a cool cutout design for the doors?" game.)

      Of course, seeing as the CM-2 [uic.edu] won 'coolest-looking computer of all time', with the CM-5 playing 'solid; but ever-so-slightly-disappointing-sequel', p

      • by dbIII ( 701233 )
        Not ten feet away from me is a box of nine-track reels - it's still not dead yet. It should be, and it would be if people had transcribed their media within a sane timeframe, but it's still in use on occasion. I don't know if the reels in that box will ever be read again, since they were a third copy in 1982, but I had to get a few transcribed as recently as a couple of years ago, after the original owners threw out their copies and then found out a decade later that they wanted the data.
        • About 50 feet from me there's about 100 boxes of microfiche that represent billing records from the 1960s on back. I have no idea why we still keep them, but they're there sure enough.

          • Not ten feet away from me is a box of nine track reels...

            About 50 feet from me there's about 100 boxes of microfiche...

            About a mile away from me is something called a Library, filled with things called "books" and "magazines" - I hear they're like papery blogs. I have no idea why we still keep them, but they're there sure enough.

    • by mwvdlee ( 775178 )

      http://newsinfo.iu.edu/pub/libs/images/usr/15356_h.jpg [iu.edu]
      Not too shabby looking for a line of racks. Please donate your $0.02 to a charity of your own choice.
      Though from the looks of it, I expect it to be mostly calculating ballistic trajectories originating in eastern Europe.

      • by mendax ( 114116 )

        Looks rather boring to me. But I'm old school... and after just making my first visit to the Computer History Museum in Mountain View in five years and seeing bits of the beauty of old room-filling computers from the last sixty years on display, I can say with some certainty that the IU machine is dull to look at.

  • by kurt555gs ( 309278 ) <kurt555gs@nOsPaM.ovi.com> on Monday April 29, 2013 @05:30AM (#43579219) Homepage

    Can you imagine how many Bitcoins this thing could mine per hour?

    • by Anonymous Coward

      The bitcoin pools are mostly PCs with GPU rigs, and they often number into the tens of thousands, so basically this supercomputer is puny compared to even the GPU bitcoin pools.

      • by flowerp ( 512865 )

        Bitcoin is now dominated by FPGA and ASIC miners (dedicated hardware); most GPU farms have moved on to Litecoin.

      • by Anonymous Coward

        RTFS... in what world are 676 Kepler K20s called puny?

        • The Bitcoin world.

          Kepler is good for stuff involving lots of double-precision floating-point, like scientific computing. Physics, chemistry, stuff like that.

          AMD has a lead on Bitcoin mining because a) it's integer work, not float32 or float64; b) AMD has a shitload of slower cores rather than a smaller number of more efficient, powerful cores; and c) due to some weird coincidences of architecture, AMD designs (both VLIW5 and GCN, IIRC) can run the "main loop" of Bitcoin mining in only one instruction, while Nvid
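          The "main loop" operation usually cited in this comparison is the 32-bit rotate that SHA-256 leans on; here is a minimal Python sketch of it, under the assumption that this is what the (truncated) comment is pointing at:

              def rotr32(x, n):
                  # 32-bit rotate right, the primitive SHA-256 uses constantly.
                  # AMD GPUs of that era map this to a single instruction
                  # (BIT_ALIGN_INT); hardware without a native rotate emulates
                  # it with two shifts and an OR, as below.
                  return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

              def big_sigma0(x):
                  # One of SHA-256's mixing functions: three rotates per call,
                  # and the compression function runs 64 rounds per block, so
                  # rotate throughput dominates mining speed.
                  return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)

              assert rotr32(0x00000001, 1) == 0x80000000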

  • They'll just use it to mine Bitcoins I'm sure.

  • It's already found 2 verified bitcoins and paid for the first month's electric bill.
  • by Virtucon ( 127420 ) on Monday April 29, 2013 @07:04AM (#43579549)

    While it has been in vogue for years for universities to have this capability in-house, I have to question the wisdom of this kind of investment in a few areas.

    First, there was recently an article on Slashdot about the Federal Government retiring Roadrunner after less than 5 years because it was too much of a power hog. [digitaltrends.com] I haven't seen anything in the press releases about Big Red II that would indicate IU has solved the power-obsolescence issue; in five years we'll probably see Big Red II retired because it isn't power efficient next to newer technology. IMO, in five years IU will be looking to fund Big Red III, so I hope they get their value out of this investment, because the total operating cost has to be very, very high just to keep the lights on for this thing.

    Second, with utility computing models available in the cloud with AWS, Google Apps, etc. for large-scale experiments, more and more companies are choosing the utility model to run their research rather than buying hardware. I don't need to cite them all here, but there are stories day in and day out of companies and universities leveraging utility-based cloud models for HPC. You have one resource here at IU, when you could lease multiple cloud-based resources with hundreds of thousands of nodes simultaneously, rather than relying on one large machine in your data center.

    I can imagine there are quite a few experiments IU can do with it, but when I read their press, it's available to IU students and faculty; does that mean they won't let other academic institutions use it? If so, it's a very expensive resource that only one institution can use, and I doubt they can keep it busy 24x7x365 for its useful life with experiments. Maybe I'm wrong, but I just can't see this kind of large-scale investment being feasible over the coming years, when it will be so inexpensive to run the same work, disposably, in a cloud-based model.
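    To make the own-versus-lease arithmetic concrete, here is a toy Python amortization sketch. Every figure in it is a made-up placeholder chosen to show the calculation, not a real price for Big Red II or for any cloud provider:

        capex = 7_500_000            # hypothetical purchase price, $
        lifetime_years = 5
        power_mw = 1.0               # hypothetical average draw, MW
        dollars_per_kwh = 0.08       # hypothetical electricity rate
        node_count = 344 + 676       # CPU + GPU nodes, from the summary

        hours = lifetime_years * 365 * 24
        power_bill = power_mw * 1000 * hours * dollars_per_kwh
        owned = (capex + power_bill) / (node_count * hours)  # $/node-hour, 100% busy
        cloud = 2.00                 # hypothetical GPU-instance price, $/node-hour

        print(f"owned: ${owned:.2f}/node-hour vs. cloud: ${cloud:.2f}/node-hour")
        # The comparison swings on utilization: at 50% utilization the owned
        # figure doubles, which is exactly the "can they keep it busy?" question.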

    • It might help to know that many universities can't just use cloud services willy-nilly. There are only a few people on campus who are technically allowed to agree to a license agreement (that doesn't mean others don't, but for something big it becomes important). Those license agreements have to go through significant negotiations to ensure all requirements of state laws governing the university are met, all NIH or other grant requirements are met, etc. Just the contract negotiations alone can take a year o
    • by gtall ( 79522 ) on Monday April 29, 2013 @07:50AM (#43579745)

      Indiana University is not simply a university; it is a state school system with several regional campuses. Oddly enough, Purdue is Indiana's second state school system with its own regional campuses. They both share a campus at IUPUI (Indianapolis). I'd be very surprised if there is any free time left for this. And if there is, IU would likely just lease it to Purdue.

      All in all, it is probably cost effective for them to do this. They are unlikely to have made this decision in a vacuum; they are well aware of the alternatives. (I'm an IU alum).

    • by riley ( 36484 ) on Monday April 29, 2013 @08:29AM (#43579981)

      Cloud computing is not appropriate for all types of research computing. Let's say you want to use Amazon's cloud offering, but you have a genomic and geospatial dataset of 60 TB. While not ubiquitous in research computing, that is not unheard of, especially in bioinformatics. The cost of storage and the cost of transfer will each eat away at whatever grant is funding the research. This is a business decision: does the cost of the computing resource and its operation result in [ more grants / better faculty retention ] than not having it?

      The cost-benefit analysis has been done, and while cloud computing has its place, there are additional costs that make it problematic. The cloud is not a panacea.

      That said, in five years IU could very well be looking for its next big computer. The average lifespan of a supercomputer is 5-8 years. So, five years is on the early side of looking for the next big thing, but not outrageously so.

      Disclaimer -- I run high speed data storage for a university. I've written acceptance test measures for high performance computing resources. I've done the cost-benefit analyses.
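      To put rough numbers on that 60 TB example (the link speed and per-GB egress price here are hypothetical round figures, not any provider's actual rates):

          dataset_bytes = 60e12           # the 60 TB dataset above
          link_gbps = 1.0                 # assumed sustained WAN throughput
          seconds = dataset_bytes * 8 / (link_gbps * 1e9)
          print(f"upload at {link_gbps} Gbit/s: {seconds / 86400:.1f} days")  # ~5.6 days

          egress_per_gb = 0.10            # hypothetical $/GB transfer fee
          print(f"one full egress: ${dataset_bytes / 1e9 * egress_per_gb:,.0f}")  # ~$6,000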

      • I don't run HPC on 60 TB datasets or do anything remotely like that, but I know that for me, storing a video game or something on a local disk is cheaper and higher-performance than signing up for an ISP that gives me four aggregated SDSL links (symmetrical 20 Mbit/s) and renting Amazon storage.

    • by godrik ( 1287354 )

      Well, that's a valid question, and depending on the case the answer can be different. Many applications (especially the ones that will use GPU clusters) need a good interconnect. Cray provides that on its machines. Last time I checked the cloud platforms, they did not have a suitable interconnect (10Gig Ethernet has high latency).
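      The latency point can be made concrete with the standard alpha-beta (latency plus bandwidth) message-time model; the figures below are illustrative assumptions, not measurements of any particular network:

          def transfer_time(msg_bytes, latency_s, bytes_per_s):
              # T = alpha + n/beta: fixed startup latency plus serialization time.
              return latency_s + msg_bytes / bytes_per_s

          links = {
              "10GbE (assumed ~50 us latency)": (50e-6, 10e9 / 8),
              "HPC fabric (assumed ~1.5 us latency)": (1.5e-6, 40e9 / 8),
          }
          for name, (lat, bw) in links.items():
              for n in (8, 64 * 1024):    # a tiny halo value vs. a 64 KiB block
                  print(f"{name}: {n} B -> {transfer_time(n, lat, bw) * 1e6:.1f} us")
          # Small messages are almost pure latency, so fine-grained parallel codes
          # see the full ~30x latency gap no matter how much bandwidth the link has.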

    • quit using the cloud word; what are you, a marketing choad?

      multiple "cloud based resources" (i.e. just another goddamn bunch of servers on the internet) don't have the high speed network interconnects of a supercomputing cluster

  • The IU machine at 1 PFLOP would rank around 24th in the world and 11th in the U.S. (http://www.top500.org/list/2012/11/).
  • What is the advantage of having two different GPUs in one node? Any idea?

    • There aren't two different GPUs in a node; the AMD Interlagos is a Bulldozer Opteron, made of two dies in one package, the same die as the FX-8150 CPU.
      The CPU nodes run a newer Opteron, about 10% more efficient, made of two Piledriver dies (similar to the FX-8350). It's weird that two different kinds of CPUs are used, but that's probably because the GPU nodes were already made, validated, etc.

      • Well, the summary says there are two different GPUs per node, or is that just misleading, and they mean a CPU + GPU on a GPU node, where the CPU is different from the one on the CPU nodes?

        • It's implied that a GPU node has a CPU + GPU, because the GPU won't do anything at all on its own; it can't even talk to the network or run an OS (until some future generation of GPU includes CPU cores). The summary said they were "GPU-enabled nodes", too. The reader is then supposed to understand (or already know) that the AMD 16-core chip is a CPU, or infer it somehow.

          Alright, I'm only now seeing the last sentence's ambiguity; the trick is you don't see what's wrong when you understand all the techno-b

          • Yeah, after your post, while I was typing, I guessed that.
            Thanks for the clarification, anyway! (I'm a software and process guy; much as I lost interest in the most modern car models, aircraft models, etc., I don't really have a clue right now about CPUs.)

  • So do they have Bobby Knight in a closet kicking chairs to power this thing?

  • by MrLizard ( 95131 ) on Monday April 29, 2013 @09:45AM (#43580717)

    ...if this got as much attention in the local press as throwing a ball into a basket does.

  • by Richy_T ( 111409 )

    So who was it dedicated to?
