Hardware

Cray SX-6 Installed in Alaska

Dhrakar writes: "Now, I know that normally press releases are immediately round-filed; however, as this is the first NEC^H^H^HCray SX-6 to be installed in the U.S., it is newsworthy. The 8-CPU, 64 GB system has been installed at the Arctic Region Supercomputing Center for benchmarking and other testing. See either the ARSC or the NY Times (sub. required. Yada, yada) article."
  • Were they able to get a discount by not purchasing cooling equipment, thanks to the location? I suppose Alaska could be the paradise for heavy metal and overclocking...
    • > I suppose Alaska could be the paradise for heavy
      > metal and overclocking...

      For only about 9 months of the year, probably a bit less. Fairbanks is deep in the interior of the state and is known for pushing 100 degrees Fahrenheit in the summer (and then dropping to 30 below in the depths of January).

      I think Fairbanks even holds a few records for the biggest seasonal variances in temperature.

      Even less extreme parts of the state get to the point where you'd have to install air conditioning to get you through notable chunks of the year.

      • But Yakutsk in Siberia probably has a few more... it varies from -71 degrees Celsius to toasty warm with mosquitoes.

        (-84 Fahrenheit to +102 Fahrenheit according to another account)

        They were on the news a while ago with kids not having shoes (in winter!) because of the financial situation. When it gets below -60 degrees C, some of the equipment stops working (similar to the situation in the UK when it gets below -1 degrees C).

        And they have really nasty floods there too.

      • by 4of12 ( 97621 )

        Fairbanks even holds a few records for the biggest seasonal variances in temperature.

        I wouldn't doubt it.

        I used to live there some time back. The depths of winter would see super lows around -60F sometimes in town where the ice fog [alaska.edu] and carbon monoxide [alaska.edu] from running vehicles would pile up. (You'd be afraid to turn off your car, too, at those temperatures unless you were near an outlet you could plug your engine block heater and battery warmer into.) Fortunately, on the Fbx campus there are lots of parking spaces with such plugs.

        Also, up on the hill where the UAF campus is located, the temperatures in the dead of winter are usually warmer than downtown Fbx, or places southeast of the city (Badger Road).

        I could tolerate the cold with minor inconvenience. You can even wear tennis shoes outside quite nicely for up to about 15 minutes at a time - about the time to go between buildings in the worst case. The more insidious drawback to Fbx in the winter is the paucity of daylight. [nami.org]

        Summertime high temperatures are usually in the 80s in early July; August is the rainy season. I once saw it go into the low 90's, but that's as unusual as going below -60F in the winter.

        Oh, and definitely watch out for the mosquitoes. At the height of the season, the arctic is infested with as many of the little bloodsuckers as the Everglades.

        Not to be all down on Fairbanks - there's a lot of wonderful scenery (the Alaska Range to the south, including Denali/McKinley), great rivers, fishing, hunting, backpacking, etc. Frequently you can see the aurora borealis in the winter.

    • You should be aware that air conditioning equipment also takes care of humidity. Putting the computer in an environment open to the outside will create a lot of condensed water on the circuit boards, which is a very bad thing.
      • Putting the computer in an environment open to the outside will create a lot of condensed water on the circuit boards, which is a very bad thing.

        Ummm, yeah.

        Last I heard, condensation happens when the surface in question is colder than the air around it. I assume (not that I'd assume much about a supercomputer) that the components still get hotter, not colder.
    • Well I wouldn't be moving my hardware up there anytime soon, Alaska is seven degrees warmer [nytimes.com] on average than it was 30 years ago.
  • for the record. (Score:3, Informative)

    by Maskirovka ( 255712 ) on Sunday June 16, 2002 @01:00AM (#3710096)
    Before anyone trolls about putting it in Alaska to save on air conditioning, Fairbanks gets into the 80s (F) in the summer. Just thought I'd clear that up.

    Maskirovka

    Is a counter troll still a troll?
    • Why is this post being modded redundant? If you look at the post commenting on the temperature of Fairbanks that's a reply to the thread directly above this, you can see that this thread was initiated 48 minutes BEFORE THE OTHER ONE!

      When will the god damn mods start looking at the timestamps? Just because a reply to another thread is higher on the list than a new thread doesn't mean it was posted first!
  • [flip flip flip]

    Yeah, it's supported by Veritas NetBackup DC already. That and my TI calculator and GBA.

  • ping cray (Score:3, Funny)

    by larry bagina ( 561269 ) on Sunday June 16, 2002 @01:01AM (#3710100) Journal
    I thought cray was dead, but it turns out, they were just using BSD.
  • what a waste! they should give it to me so i can play games on it! who cares about the weather anyway...
  • by chill ( 34294 ) on Sunday June 16, 2002 @01:04AM (#3710110) Journal
    Just in case you want to play with toys like these, the ARSC is looking for an admin [arsc.edu].

  • by rob-fu ( 564277 ) on Sunday June 16, 2002 @01:06AM (#3710115)
    Cray SX-6 Installed at ARSC
    Fairbanks, Alaska - The Arctic Region Supercomputing Center (ARSC) and Cray Inc. (Nasdaq NM: CRAY) announced today an agreement that places a Cray SX-6 at ARSC. ARSC is pleased to be able to offer this leading technology to the wi

    Oh wait a minute, it's a f*cking supercomputer! Sorry about that.
  • by Sivar ( 316343 ) <charlesnburns[@]gmai l . c om> on Sunday June 16, 2002 @01:10AM (#3710126)
    What I am waiting for is the Cray SV2 [cray.com] which can have up to 1024 Cray vector processors. Who needs a beowulf cluster?
  • About the Cray SX-6 (Score:3, Informative)

    by smiff ( 578693 ) on Sunday June 16, 2002 @01:15AM (#3710136)
  • 500MHz ? (Score:3, Insightful)

    by FwOOm ( 22492 ) <fwoom@@@hacksec...org> on Sunday June 16, 2002 @01:19AM (#3710141) Homepage
    A system that can pump out 64 GFLOPS while running at a measly 500 MHz? Really shows how poor MHz is as a measure of system performance. (Rough math below.)
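    (A minimal back-of-the-envelope sketch of that math. The per-CPU figure of 16 floating-point results per clock is an assumption about the SX-6's vector pipes, not something from the article:)

        # Rough peak-FLOPS arithmetic: vector CPUs retire many results per clock,
        # so the clock rate alone says very little.
        cpus = 8
        clock_hz = 500e6           # 500 MHz
        flops_per_cycle = 16       # assumed: vector add + multiply pipes per CPU
        peak_gflops = cpus * clock_hz * flops_per_cycle / 1e9
        print(peak_gflops)         # 64.0
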
  • by Procrasturbator ( 585082 ) on Sunday June 16, 2002 @01:26AM (#3710157)
    It shall be used to create, download, store, and compile the WORLD'S MOST POWERFUL PORN.
  • pricing (Score:3, Interesting)

    by martissimo ( 515886 ) on Sunday June 16, 2002 @01:27AM (#3710161)
    hmm, for all the people who wanna figure out what it would cost to run one of these babies:

    This link [neceurope.com] states in it that:

    The "SX-6 Series" will be shipped from the end of December 2001 with the monthly rental price starting from 2,800,000 Yen.


    By my calculations that's actually only about 22 thousand a month in dollars (rough conversion below)... not like I'm gonna be grabbin' one, but frankly I would have thought they'd charge more.
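    (A quick sanity check on that conversion. The exchange rate is an assumption of roughly where the yen sat in 2002, not a figure from the release:)

        # Monthly rental quoted by NEC, converted at an assumed ~125 yen/USD.
        monthly_rent_yen = 2800000
        yen_per_usd = 125
        print(monthly_rent_yen / yen_per_usd)   # ~22,400 USD per month
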
  • by Anonymous Coward
    Tux would like it up there.
  • (sub. required. Yada, yada) - Well, not quite THAT remote... Personally I think Alaska is TOO big of a cooling solution.
    • Remote? Yeah, maybe... I still often think about moving there before too long; getting out of crowded Kalifornia might be a good idea soon... but I'm a little too comfy in my current job... but the one posted at this site is real tempting... hmm...
    • Remote, YES. Out of touch? No way. Where else in the world can you play with 2 Crays, NEC, IBM SPs, SGIs, a Linux cluster, etc., etc., and then have to stop so everyone can check out the moose walking through the parking lot? Then get on the phone with someone from Lawrence, or Sandia, or the HPCMO, etc. No traffic, no gangs, and yet it's still one of the highest-tech centers in the nation. Way cool place!
  • And my god, these machines are beautiful and fast. You won't believe how much they can do. Of course, they're not as fast as the ones used for nuclear simulations and the like, but they make your AMDs and Intels look like a horse and carriage compared to a Ferrari. I have the honour of building one of these machines. It sucks about 50 kW of power. You can only dream of getting one of these machines.
  • They say 8 CPUs, 64 GB RAM, 1 TB disk, 64 GFLOPS peak performance. That hardly sounds like a supercomputer by today's standards. A single-processor AMD Athlon is capable of (I think) around 8 peak gigaflops (2 GHz * 4 SIMD operations using SSE instructions). Similarly, the 8 GB of RAM and 125 GB of disk per CPU is in midrange workstation territory. While there's probably a much higher-bandwidth memory system than you could get out of an 8-16 node Athlon cluster, it's not clear what problems this Cray unit will really be used for that couldn't as easily be done with a rack full of PCs or workstations.
    • Besides the fact that there is no 2 GHz Athlon, you forget one very important thing: memory bandwidth.

      A typical Athlon has a theoretical memory bandwidth of 2.1 GB/s. Now do 8 gigaops on 32-bit floats: that would translate to 32 GB/s. So 8 gigaops is not sustainable - just a short burst.

      And don't forget that the SX-6 has 2048 memory banks. The best Athlon chipsets I know of have 1 (in words: one). The best Xeon chipsets have 2.

      So while the raw power of supercomputers and PCs looks similar on paper (peak performance, a.k.a. the speed you can never exceed), supercomputers are built to deliver most of that performance sustained, not just for a short period of time.

      Another topic is price/performance. Here a plain PC cluster might be better. But if you cannot parallelize a problem that much, one fast computer solves a problem faster.
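      (A minimal sketch of the bandwidth arithmetic above, using the poster's own figures. The "one streamed operand per op" assumption is the most forgiving case; real code needs even more:)

          # Why 8 gigaops isn't sustainable on a 2.1 GB/s desktop memory bus.
          ops_per_sec = 8e9              # hoped-for 8 gigaops
          bytes_per_operand = 4          # 32-bit floats
          needed_bw = ops_per_sec * bytes_per_operand   # one operand streamed per op
          print(needed_bw / 1e9, "GB/s required")       # 32.0
          print(needed_bw / 2.1e9, "x what the Athlon bus delivers")
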

      • But if you cannot parallelize a problem that much, one fast computer solves a problem faster.

        In bioinformatics, one of the more power-hungry applications of supercomputers, there are many problems that cannot easily be split up into smaller independent pieces. 32-bit memory addressing is often a problem as well. Of course these problems can be circumvented, but in the end it all comes down to speed and not having to re-engineer complicated scientific code.

      • People *seriously* underestimate just how pathetic the memory bandwidth is on your standard desktop PC.

        For the coders among you: Suppose you had an algebraic structure datatype that you had to test against a set of n! permutations. Standard programming dogma says: generate the permutations once, store them in memory, and then grab them as needed... right?

        At least on my Athlon XP (and, I suspect, any modern processor with a piece of crap bus)... WRONG. It ends up being MUCH faster to regenerate the permutations from scratch every freaking time you need them, rather than risk having a cache miss and grabbing them from RAM.

        I know you won't believe me, because I didn't believe me at first either. I couldn't imagine that the memory bandwidth was THAT BAD. I coded it up this way to see how much WORSE it performed... and it ended up performing better. An important lesson about optimizing programs for modern Intel/AMD architectures was learned: oftentimes it is faster to recompute on the 2GHz processor than to wait for the not_2GHz_bus to fetch information from RAM.

        But please, don't take my word for it, go try it for yourself.
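        (For the curious, here is the shape of that experiment as a toy Python sketch. It is not the poster's code, and interpreted-Python timings won't reproduce the cache effect a tuned C program sees, but it shows the two strategies being compared:)

            # Toy comparison: precompute-and-fetch vs. regenerate-on-the-fly.
            import itertools, time

            N = 9          # 9! = 362,880 permutations
            PASSES = 3

            def consume(perm):
                return perm[0]          # stand-in for "use the permutation"

            # Strategy 1: generate once, store in memory, then stream from RAM.
            stored = list(itertools.permutations(range(N)))
            t0 = time.perf_counter()
            acc = 0
            for _ in range(PASSES):
                for p in stored:
                    acc += consume(p)
            t_stored = time.perf_counter() - t0

            # Strategy 2: regenerate the permutations from scratch on every pass.
            t0 = time.perf_counter()
            acc = 0
            for _ in range(PASSES):
                for p in itertools.permutations(range(N)):
                    acc += consume(p)
            t_regen = time.perf_counter() - t0

            print(f"stored: {t_stored:.3f}s   regenerated: {t_regen:.3f}s")
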
        • That's a matter of latency, not bandwidth.

          Mainframes often have several hundred MB (or maybe several GB by now) of SRAM (20 ns latency or so) along with many GB of DRAM. If this 64 GB on the Cray is SRAM that's more impressive. But even SRAM (20 ns is 40 cycles access time) is orders of magnitude slower than on-chip cache memory (1-2 cycles). So the Cray has the same locality issues as a PC.

    • Man, we were just on IRC, joking about their 500 MHz clock and saying "I wonder when the first freak will appear comparing them to their AMD XP, P4, or something"...

      I didn't think it would happen this time... But you did...

      Comparing a supercomputer to your stinky home PC... Bravo! (NOT!)

      *shrug*
    • Actually, SIMD (SSE2) can only do 2 double-precision operations in a single clock cycle. The 128-bit wide SSE2 registers can only hold 2 64-bit doubles each.

      The Athlon can execute 3 macro-ops per cycle, but the MMX instructions all take at least 2 macro-ops, so you only get 1.5 instructions per cycle, assuming everything about the execution environment is optimal (all code and data are in cache, there are no conflicts between instructions trying to use the same execution units, all data is properly aligned, etc.)
      The main difference between a supercomputer and a PC is that the supercomputer operates close to the theoretical maximum most of the time - you actually get something like 90% or better of theoretical performance unless you use terrible code. On a PC, you get close to theoretical performance when running benchmarks, and at no other time :)

      (it's like getting an industrial tool versus a consumer tool - the industrial tool has the same specs, but it's meant to run continuously for years without breaking. The consumer tool will overheat, need replacement parts, etc. etc.)
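      (Plugging the poster's own figures back into the parent's arithmetic - a rough sketch; the 2 GHz clock is the parent's hypothetical, not a shipping Athlon of the day:)

          # Re-running the parent's estimate in double precision:
          # 2 DP results per 128-bit SSE2 op instead of 4 SP results.
          clock_hz = 2e9
          dp_ops_per_cycle = 2
          print(clock_hz * dp_ops_per_cycle / 1e9, "peak DP GFLOPS")   # 4.0, half the parent's 8
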
    • I find it fascinating that people continue to try and compare a machine like this to a PC-style system. It is similar to comparing a freighter and a ski boat. Sure, the ski boat can go just as fast, if not faster. But try to use it to get any work done. Hmph! Apples-and-oranges comparisons. Let's see someone use a PC to do an ocean model, or how about trying to calculate where that devastating typhoon is going to hit? Sure, it may do it. Unfortunately your answer is going to take years to get. A little late. So, you say, do it with a Linux cluster. Sure, you may be able to do the same type of work. People are. But they are all specialized programs. Try using a cluster for ocean code, weather, fluid dynamics, bioinformatics, magnetosphere prediction, etc., etc. You could very well do one or maybe 2 on a cluster. But with this new Cray/NEC there will be all of these codes and then some. All running at the same time.
  • Check out the ARSC's website... they have some pretty snazzy hardware! SV1ex, few other Crays, several big SGIs...

    Wish my .edu had that kind of money!
  • The ARSC is well known for ordering their Crays in custom colors (usually white with black trim). They have some photos of their machine rooms on their website... the only white SV1 I've ever seen! Few other unnaturally white machines too!
  • In adolescence, where Farrah Fawcett should have graced my wall, there was a picture of a Cray supercomputer in full splendor framing no mere mortal SysAd but a Dude who went by the name ArchAngel. Resplendent all in white, he and he alone touched the holy of holies. Now it's just dross for drunken /. trolls. Oh, my lost youth.
  • Now let's see what random and stupid things we can do with this supercomputer:
    1. Find new prime numbers.
    2. Search for intelligent life.
    3. Crack Crypto.
    4. Play Doom 3 on it.
    Come on now, which one of these sounds the most entertaining?
  • I work for the ARSC (Score:5, Informative)

    by copycats ( 585806 ) on Sunday June 16, 2002 @02:20AM (#3710239)
    And we're looking for an admin.

    Details are here [arsc.edu]

    And yes, you get to play with the new Cray.

    For more information, please contact:

    Pat Babcock, Administrative Assistant Arctic Region Supercomputing Center Butrovich Bldg, Suite 108 P.O. Box 756020 Fairbanks, AK 99775-6020

    Thanks! We're looking for someone with experience with supercomputers.

    • I have to ask, but are you looking for anyone like me:

      Resume
      ------

      I read slashdot daily
      I'm pretty good at quake
      I think computers are just super!

      guess not... urghh, gotta keep looking.
    • Wow, I think this will be the first time the Slashdot effect ever affected a snail mailbox...
    • I'm a bit confused... Are you working for ARSC, or for Cray? One of your earlier posts [slashdot.org] says you "have the honour of building one of these machines". Is assembly really needed onsite when you receive one of those units? I thought they'd put the thing together and test it before shipping...

      BTW, it looks fun to play with so much processing (and electrical) power. Is it fed three-phase 600V? Or 208V?
  • What other industry can you get a job in Alaska [arsc.edu] or Hawaii [mhpcc.edu] doing the same thing? You might even end up inventing the next Mosaic [uiuc.edu] out in the cornfields. Gotta love them pork-barrel politics!
  • Where are the obligatory Beowulf cluster comments?


  • how can it help you get a date???

  • 8 athlons would melt all the snow in alaska

    (obpost)
  • Sounds like something apple would use in an advert
  • The Supercomputing Center for benchmarking and other testing? "Other" huh?

    Can this be some new hardware for the National Missile Defense that Bush is building over in Alaska? [bbc.co.uk]
  • The 8-CPU, 64 GB system has been installed at the Arctic Region Supercomputing Center for benchmarking and other testing

    2847 in Content Creation Winstone.
    3000 in Business Winstone.

    Ok, pack it up. Next!

  • How about a rack full of dual-processor Athlons? Oh - that is not one computer? Oh - sorry - you draw the boundaries where you want, but when all the machines are running the same 3-D geophysical migration it seems to me that they are one machine.

    I'm not impressed. I'll bet that the Athlon rack will compute circles around that Cray and cost far less. Not only that, individual units can be pulled and fixed or replaced rather easily.

    I'm reading down further at the comments comparing the stinky desktop PC to a "supercomputer" and I have to chuckle at the ignorance. The company I'm thinking of that put the Athlon rack in place for the 3-D migrations had an Alexis (sp), then about 100 SPARCs networked. As one of the bigger geophysical processing shops in Calgary and Houston, I rather think that they know what they are doing.
    • If they wanted a cluster, they would rather use a cluster of 32-way POWER4 machines (p690). These big irons scale much better than SMP Athlons and have faster RAM and bus systems. Finally, they're much more reliable than PC hardware.
    • I am not really sure what the spec is on the Cray, but just try to imagine a cluster of Athlons accessing 64GB of non-uniform memory: across network latency, over the bus, through the memory subsystem, and finally to the main CPU that is trying to read and write the same memory location as god knows how many other CPUs (assuming a cluster of 240 nodes). Add the software complexity needed to manage all that memory, not to mention cache coherency - effectively making it a CC-NUMA system, which, built from standard PC components, relies on very complex software just to provide memory management - and it is already quite complex.

      With the Cray, with fewer CPUs to deal with and a bigger footprint of main memory, each CPU would have more to work with, whereas each node in the cluster would have less.

      Another thing to consider is that these are vector processors, which already have a solid base of development for weather simulations, nuclear bomb testing, and such ungodly applications. (Which is why the PS2 is treated as a munition - it, too, is a vector processor.)

      I am not writing this to put down the work done in the area of Beowulf clusters and the like. But you have to look at the application it is being used for and what they intend to do with it.

      Another thing to consider is why you should even try to get an x86 processor to do vector processing. It is like when Cyrix, a few years ago, tried to do floating-point instructions in software because it didn't want to put an FPU in: it could never outperform an FPU that does floating-point calculations in hardware. In order to do vector processing, you would need to emulate in software what the Cray does in hardware. It just might not work... maybe on Transmeta, though. :-)

      Anyone have thoughts on this?
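      (As a rough illustration of the vector-vs-scalar programming model - a NumPy sketch. NumPy's whole-array operations are not hardware vector pipes, but they show the difference between working one element at a time and handing whole vectors to optimized machinery:)

          # Illustrative only: whole-vector operations vs. an element-at-a-time loop.
          import time
          import numpy as np

          n = 1000000
          a = np.random.rand(n)
          b = np.random.rand(n)

          # Scalar style: one multiply-add per loop iteration.
          t0 = time.perf_counter()
          out_scalar = [a[i] * b[i] + a[i] for i in range(n)]
          t_scalar = time.perf_counter() - t0

          # Vector style: the same arithmetic expressed as whole-array operations.
          t0 = time.perf_counter()
          out_vector = a * b + a
          t_vector = time.perf_counter() - t0

          print(f"scalar loop: {t_scalar:.3f}s   vectorized: {t_vector:.4f}s")
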
    • Clusters are nice and all, but you're missing a few big points...

      Throughput:
      Having whiz-bang fast processors is nice, but only if you can get the data to them fast enough. Why do you think processors and OSes engage in all these elaborate caching schemes... If RAM were even 50% as fast as the CPU, you'd see a marked improvement immediately, but only until you had to get something off of disk. Now if you could get your mass storage to 50% of the speed of RAM, the world would be a better place...

      Another really crippling aspect of the PC is the horrid PCI bus. We need to just throw PCI away, or relegate it to the realm of COM ports. PCI-X is on the horizon and that will bring some improvement, but what we really need is to start getting the interconnects faster.
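      (To put a number on "horrid" - a quick sketch; the figures are the standard 32-bit/33 MHz desktop PCI parameters, not anything quoted in this thread:)

          # Theoretical peak of plain desktop PCI, shared by every card on the bus.
          bus_width_bytes = 4        # 32-bit bus
          bus_clock_hz = 33e6        # 33 MHz
          print(bus_width_bytes * bus_clock_hz / 1e6, "MB/s peak")   # 132.0
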
  • So that's what's melting the glaciers.
  • The 8-CPU, 64 GB system has been installed at the Arctic Region Supercomputing Center for benchmarking and other testing.


    it gets about 923749083274fps in quake III ;)
  • Alaska's temperature has risen 9 degrees in the past century. Why the hell are they installing super computers there? Maybe they should put their heat transfer unit inside a glacier.

    (I know it's an insignificant amount of heat increase, but still... Maybe the start of a trend?)
  • Is this show going to go to Fairbanks and help the supercomputing geeks find hot dates with hot ski-bunny types?!

    --
    Billy Corgan: Billy Corgan, Smashing Pumpkins.
    Homer Simpson: Homer Simpson, smiling politely.
    • Let me tell you something, I am a student at the University of Alaska Fairbanks and there are NO hot ski bunny types to speak of. In fact, most men in Fairbanks look better dressed as women than the women look dressed as women. The new Cray is cool though, because students can check out processing time on these things for "worthwhile" projects, such as calculating the odds of finding a girlfriend in Fairbanks (about 1 in 10,000,000).
      • by kyoko21 ( 198413 )
        One VR helmet: $5000
        One VR mouse: $1000
        Integration cost of the VR equipment to the CRAY: $25,000 (roughly)

        Spending the next 12 months with your VR girlfriend in true cyberspace: Priceless.

        There are things you can buy, and there are things you can build, and then there are things you can get by building and buying and having a perverted imagination.

  • Too bad somewhere along the way we lost the American Way [geocities.com].
  • We have several systems at work with up to 24 gigs of RAM and 18 CPUs. Why is the installation of this thing that important?

    If it had 64 terabytes of RAM, it'd be interesting.

    - A.P.
  • They were supposed to sell a number of SX machines since they struck the deal with NEC. This is the *only* one they managed to sell, and since they didn't have much money left to sustain their own SX-6 installation in Chippewa Falls (which was the actual first US SX-6 installation), they sold their machine to the ARSC, with a deal that lets them use it sometimes for testing and training.

    My feeling is that they are utterly uninterested in selling SX systems; they'd rather sell their more profitable SV systems or their crazy MTA systems (woohoo, they managed to build ONE).

    Disclaimer: I used to work for Cray on their SX support team, and with HNSX Supercomputers before that, the North American supercomputer subsidiary of NEC. I left of my own will; I wasn't part of the Friday-right-before-Christmas round of layoffs Cray did.
