Hardware

Time For A Cray Comeback?

Boone^ writes "The New York Times has an article (free reg. req.) about Cray Inc.'s recent resurgence in supercomputing. It covers a bit of Cray's decline when the Cold War ended, "the occupation" under SGI, and the rebirth of the company after the Tera (now Cray Inc.) purchase. Cray Inc. has lately been shipping its vector-based Cray X1 machine, is designing ASCI Red Storm, and was recently one of three vendors (along with Sun and IBM) to win a large DARPA contract (PDF link) to design and develop a PetaFlops machine by 2010. Could Cray Inc. be poised for a comeback? Wall Street seems to think so."
  • by Anonymous Coward on Monday August 04, 2003 @05:44PM (#6609889)
    Partner Link [nytimes.com]

    Posting as Anonymous Coward, please award my Karma to starving children in the world.

  • Naturally. We have another Bush in the White House, and I even hear that Wang Chung is making a comeback -- so why not Cray?
    • by Frymaster ( 171343 ) on Monday August 04, 2003 @05:55PM (#6609990) Homepage Journal
      waitaminnit. cray - the computer of the defense industry during the cold war - is releasing a machine called the "red storm"?

      is there a secret message here? should tom ridge be called?

  • by Anonymous Coward on Monday August 04, 2003 @05:44PM (#6609898)
    SCO vs. Cray [yahoo.com]
  • Icon is back (Score:2, Insightful)

    by aspelling ( 610672 )
    Many scientists are very concerned about the state of supercomputing in the US. Hopefully a new generation of supercomputers will improve the situation.
    • Oh yeah, real concerned. The top US supercomputer can only do 20 TFLOPS [top500.org] or so. That will never do.

      Imagine a beo...

      • The top US supercomputer can only do 20 TFLOPS or so.

        And that's only the ones we know of.

        The NSA probably thinks top500.org is rather amusing.

        • Re:Icon is back (Score:2, Interesting)

          by mOdQuArK! ( 87332 )
          Maybe, maybe not. I don't really think even the NSA is _that_ far ahead of commercial process technology. It's more likely that they do custom designs for whatever applications they need, which allows them to process their data much faster than any general-purpose setup.
        • Re:Icon is back (Score:5, Interesting)

          by CausticWindow ( 632215 ) on Monday August 04, 2003 @08:09PM (#6611019)

          I remember a story from an NSA contract worker.

          In the early days of Cray, he and many others were wondering how they could keep things running, considering that their official budgets only showed ten or so sales per year.

          Until he got the tour of the NSA computer plant, where they had a hall the size of two football fields, filled with Crays.

      • Except that system is a distributed memory cluster, not a single image vector system like the Japanese one (which is over 40 TF, BTW).

        I would like people to stop calling clusters supers :). Sure 10 million rabbits could pull a few train carts, but that doesn't make them a train engine :)
  • by Pope Raymond Lama ( 57277 ) <gwidionNO@SPAMmpc.com.br> on Monday August 04, 2003 @05:47PM (#6609919) Homepage
    Of course I expect that...in my Playstation IV,
    equipped with an opto-quantic Emotion Engine VI
    and a couple petabytes of holographic storage.

  • Definitely (Score:4, Informative)

    by Anonymous Coward on Monday August 04, 2003 @05:48PM (#6609924)
    There are still MANY applications for supercomputers. A lot of people think that linux/beo-clusters are going to be replacing supercomputers of the Cray/NEC/IBM variety. Not true. There are still many research, scientific, and military applications that require machines built not for "slow" distributed number crunching, but for ultra-high-speed processor and memory architectures.

    So definitely, time for Cray to come back and retake the supercomputer industry crown.
  • 2010? (Score:5, Funny)

    by stratjakt ( 596332 ) on Monday August 04, 2003 @05:53PM (#6609975) Journal
    There's a whole bunch of PETAFlops outside of McDonalds right now having a sit in and screaming about how fur is murder.

    I had to literally step on their faces to get a Big Mac.

    • Re:2010? (Score:2, Funny)

      by taernim ( 557097 ) *
      Fur is murder?
      So McDonalds is selling McFur burgers then?
      Hmm... maybe we /should/ listen then...

      I don't mind meat, but I generally draw the line at eating hair... ^_^
    • If the Big Macs at that McDonald's have fur, I wouldn't want them, either.

  • by SuperDuG ( 134989 ) <be@@@eclec...tk> on Monday August 04, 2003 @05:54PM (#6609976) Homepage Journal
    ... but isn't the market for supercomputers fairly small? I mean you've got governmental contracts (research, educational, who knows what) that have to make up 95% of all the purchases, and then a small private market. I mean how many companies are striving for a petaflop machine to run their database server?

    If you look at the list of top 100 supercomputers, there are systems that are almost 15 years old or even older (not sure on a few). I know these take years to build and are multibillion dollar projects, but the time in between has got to be a killer.

    Then there's the question of ... what do you need a supercomputer for? The applications are pretty limited for a petaflop computer, unless you're doing mass storage, cryptography (cracking), or simulations.

    Don't get me wrong, I'm all about nuclear testing being done in 1's and 0's instead of in the ocean or in the desert, but how big a bomb do you really need when it's estimated there are enough nukes to blast the entire land surface of the earth three times over?

    • by MxTxL ( 307166 ) on Monday August 04, 2003 @06:01PM (#6610042)
      Then there's the question of ... what do you need a supercomputer for?

      To advance the state of the art. And not just in the field of computers, but also in any field that ends up benefiting from it, which is potentially a great many. Aerospace, geology, meteorology... there are BUNCHES of fields that benefit greatly from having more and more massively powerful computers. Sure, most projects can't afford the latest and greatest of the state of the art in supercomputing, but the fact that the state of the art progresses pushes prices down on the older technologies that most labs CAN afford. This is a benefit for science as a whole.
    • by anzha ( 138288 ) on Monday August 04, 2003 @06:03PM (#6610054) Homepage Journal

      There are other uses too. Consider: the weather guys working on global warming and other climate modeling want a 500-petaflop-sustained, massive-memory machine to get the granularity that they want.

      BTW, what's the 15 YO machine? I can't think of any...certainly not ones that are still in the Top 500. Hell, the ones I worked on 10 years ago, you can nearly buy the floppage on the desktop now...

      As an interesting aside, the DARPA contract is out in part because they think the traditional drivers in computing speed are going to peter out around 2010...the implications of that are definitely interesting, no?

    • by agurkan ( 523320 ) on Monday August 04, 2003 @06:03PM (#6610060) Homepage
      Nuclear simulations are used to see if the warheads are still effective after not being used for long times, not to see if they'll wipe out a city right after they are produced.
    • by Doesn't_Comment_Code ( 692510 ) on Monday August 04, 2003 @06:11PM (#6610115)
      Then there's the question of ... what do you need a supercomputer for? The applications are pretty limited for a petaflop computer, unless you're doing mass storage, cryptography (cracking), or simulations.


      You're missing the big picture...

      Massive multiplayer Quake on a 614,400 x 819,200 screen.

      Thank you Cray.
    • by Pharmboy ( 216950 ) on Monday August 04, 2003 @06:15PM (#6610142) Journal
      Don't get me wrong, I'm all about nuclear testing being done in 1's and 0's instead of in the ocean or in the desert, but how big a bomb do you really need when it's estimated there are enough nukes to blast the entire land surface of the earth three times over?

      Well, the earth is over 2/3rds covered with water, and now we have the technology to reach the moon, Mars, Venus and beyond. Remember the spectacle when a comet hit Jupiter? Just imagine a Beowulf of those, but really big nukes instead :D

      On a more serious and less morbid note, I bet some other uses exist in physics, medicine and even cosmology. I even hear they compare 'potential' cures for diseases using computer modeling to design drugs that we don't yet know how to make; good old biotech. You are correct that yes, this IS a very very limited market, but when you sell them for a billion bucks each, you don't need to match Dell's volume to make a profit. I wouldn't be surprised if the technology leads to some advancements in our pitiful micro world as well.
    • by binaryDigit ( 557647 ) on Monday August 04, 2003 @06:15PM (#6610154)
      Don't just think about solving a static problem faster, it's also about solving a problem better through the use of more variables. Take weather simulation. If having too many variables stretches today's forecast into next week, then it's useless. So you limit the number of variables to come up with a "close enough" forecast in a more timely manner. With a faster computer, you can get a more accurate simulation in a more reasonable time period. This increase in accuracy/complexity is then useful in many fields.
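
      To make the accuracy-versus-turnaround trade-off concrete, here is a rough back-of-envelope sketch (my own illustration; the per-cell cost and machine speeds are assumed, not taken from the comment). Doubling the resolution of a 3-D grid multiplies the work by roughly 16x, so a forecast that finishes in time on a faster machine simply cannot be run at the same accuracy on a slower one.

      /* Hypothetical scaling sketch: estimate forecast runtime as grid resolution
       * and sustained machine speed change. Work is assumed to scale as
       * (cells per side)^3 spatial cells times (cells per side) time steps. */
      #include <stdio.h>
      #include <math.h>

      int main(void) {
          const double flop_per_cell_step = 5000.0;            /* assumed work per cell per step */
          const double machine_gflops[] = {1.0, 10.0, 100.0};  /* assumed sustained speeds */

          for (int m = 0; m < 3; m++)
              for (double n = 100; n <= 400; n *= 2) {
                  double cells = pow(n, 3);
                  double steps = 10.0 * n;                      /* CFL-like time-step scaling */
                  double hours = cells * steps * flop_per_cell_step
                                 / (machine_gflops[m] * 1e9) / 3600.0;
                  printf("%6.0f GFLOPS, %4.0f^3 grid: ~%8.1f hours\n",
                         machine_gflops[m], n, hours);
              }
          return 0;
      }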

      • Don't just think about solving a static problem faster, it's also about solving a problem better through the use of more variables.

        You're right and all, but the pedant in me can't help but point out that it's not necessarily such a qualitative difference as you suggest. First, pick the number of variables that you want to use. Now, it's a static problem, and the only difference between two machines is how fast each one will solve it.

        On the other hand, you've got a good point, in that the difference can
    • by morcheeba ( 260908 ) on Monday August 04, 2003 @06:28PM (#6610235) Journal
      Yep, you are a bit wrong... (you didn't think a challenge to the slashdot community would go unnoticed?!)

      From this site [top500.org], you can see the breakdown by organization:
      Usage        Count   Share    Rmax     Rpeak    Procs
      Industry       202   40.4 %    82398   182964    62869
      Research       131   26.2 %   187689   278030   120046
      Academic       115   23.0 %    77143   133564    45216
      Classified      27    5.4 %    14167    20691    12892
      Vendor          22    4.4 %    11033    15545     5230
      Government       3    0.6 %     1317     2256      528
      Total          500  100.0 %   373749   633052   246781
      There are a lot of companies that use supercomputers, although maybe not the type you're thinking of. Of course, there are the number-crunchers: oil companies are big users (to crunch data & find new oil), and car companies (BMW). But there are also the transaction-processors, like SprintPCS and Ebay (used to be in the top 500), that make the list just by the sheer number of connected processors.

      Here's the latest list [top500.org]
      • by SpikeSpiff ( 598510 ) on Monday August 04, 2003 @06:42PM (#6610342) Journal
        To me, the 5.4% classified figure is improbably low. The same defense establishment that kept the hundreds-of-millions-of-dollars stealth fighter secret for five years can certainly keep multi-million dollar computers secret.

        Especially because it's so much easier to hide a computer than an airplane. No sightings in area 51....

        We have to assume that the state of the art is way past the public data. Cray has a "lousy" $150 MM in yearly revenue. They could be spending 10X that on heavy computing for national security. The government is spending $25BB on intelligence and another $400 BB on defense every year. Cray could be a drop in the bucket, even a red herring. I'd love to know what is going on in the basements at Fort Meade.

    • Then there's the question of ... what do you need a supercomputer for? The applications are pretty limited for a petaflop computer, unless you're doing mass storage, cryptography (cracking), or simulations.

      Actually, this sort of machine would be a total waste for mass data storage. On the other hand, there are a great many private sector uses for this sort of machine. Ford for instance runs a number of their crash test simulations on Cray vector machines.

  • by Anonymous Coward
    Cray died. Anything else is just trading on his name.
    • The name Cray is synonymous with speed and high end performance. Of course many clued up folks are building their own solutions with the power of clustering.
  • Saw this earlier today. My first thought was how cool it was to see the old cray logo again. More than that, 'tho, I can see some real possibilities here. Since home computers are increasingly looking like supercomputers of yore, it will be interesting to see if any of this technology trickles down to the home market. I want a CRAY AMD box.
    • That's like Ferrari selling a cheap-ass subcompact shopping cart. Cute idea for folks who can't get the real deal, but it ain't gonna happen. And if it does, don't expect home market pricing - e.g., SGI's Intel workstation affair.
  • Sun Enterprise 10000 (Score:3, Informative)

    by DNS-and-BIND ( 461968 ) on Monday August 04, 2003 @05:58PM (#6610022) Homepage
    Didn't Sun basically buy out or hire away a bunch of Cray, Inc. people? I always heard the E10000 was actually a Cray product. Oh, and just to brag, I have a blue jacket with a picture of a Y-MP-90 on the back with the words, "CRAY - WORLD'S FASTEST SUPERCOMPUTERS". Too cool for words. Ebay rules.
    • The jacket ain't cool, it's sad; it's on a similar level to a propeller cap.
    • by putaro ( 235078 ) on Monday August 04, 2003 @06:54PM (#6610465) Journal
      The E10000 is a Celerity product. Celerity was an independent Unix box maker back in the 80's with their own processor architecture. Celerity went bust trying to bring a "minisupercomputer" version of the architecture to market in about 1987 (33 MHz, whoo hoo!). The assets and technology of Celerity along with the design team in San Diego were acquired by Floating Point Systems (FPS). FPS brought the system to market and made the transition to a SPARC based architecture (66 MHz) before going bust. The assets and technology of FPS along with the design team in San Diego and now the manufacturing team in Beaverton were acquired by Cray. Cray did a couple of turns of the crank on the FPS product and sold it as a "business supercomputer". When Cray was acquired by SGI, SGI wanted no part of the SPARC business and sold (yes, again) the San Diego design team (and I think the Beaverton group) to Sun who finally brought a SUCCESSFUL product to market with the E10000.

      But it's still the same core team down in San Diego, so I like to think of the E10000 as being a Celerity product.
    • I always heard the E10000 was actually a Cray product.

      I believe that the origin of the E10000 was from a company called Floating Point Systems. They were working with Sun to develop a SPARC based parallel processing computer. If I remember correctly, Cray bought FPS, and later SGI bought Cray. Since the FPS machine was SPARC based, and since SGI wasn't interested in SPARC, they sold it to Sun.

      So in a roundabout way, the E10000 came from Cray, but I think it was mostly from the original Floating Poi

      • by laird ( 2705 )
        Quite a few of the people working on the E10K were from Thinking Machines Corporation. TMC was Danny Hillis' company that introduced massively parallel supercomputing. The first generation machine was a Symbolics workstation coordinating up to 65,536 single-bit CPU's connected by a hypercube network. Each CPU was fairly slow, but there were tons of CPU's and CPU performance was balanced nicely with network throughput (whereas most MPP machines have fast CPU's starved for data). Weird, but also astoundingly
  • I was just on their site looking at the machines. Weird, weird, weird.
  • Not Found
    The requested URL / was not found on this server.

    Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
    Apache/1.3.26 Server at www.tera.com Port 80

    ---

    Looks like they need to host their website on a SUPERcomputer to handle a Slashdotting! (Noooooobody expects a Slashdotting! [slashdot.org] :)
  • by anzha ( 138288 ) on Monday August 04, 2003 @06:07PM (#6610079) Homepage Journal

    The home page [cray.com] at Cray for the Cascade project.

    There are some interesting PDFs there. Chew, mull, and consider.

    Also consider what Horst Simon, head of NERSC [nersc.gov] said here [hoise.com] too.

  • That's good (Score:2, Funny)

    by Anonymous Coward
    Maybe now that there's once again a major player in the computer market with machine casing designs even SILLIER than Apple's, the rest of the Geek Community will give us a little slack..
  • by Tumbleweed ( 3706 ) on Monday August 04, 2003 @06:15PM (#6610141)
    Why do people buy those really expensive supercomputers, when they could just buy an Apple one instead? They're much cheaper!
  • by taradfong ( 311185 ) * on Monday August 04, 2003 @06:20PM (#6610184) Homepage Journal
    ...isn't 'Cray' today about as 'Cray' as the company that now owns 'Atari'? What's left besides the name of the original company?
  • by baryon351 ( 626717 ) on Monday August 04, 2003 @06:25PM (#6610220)
    OK, this is about as much a kiddy thing as asking how many VWs fit inside a football stadium or something, but... anyone know of a site with info on how current and past supercomputers compare to current desktops? Where are we at now with 2GHz G5s and 3.3GHz P4s, relatively?

    One of the comparisons made when I was at university was of a 30-something MHz 386 with a supercomputer from 1973, showing how they do about the same amount of processing/data transfer but in completely different ways. I found that fascinating.
  • Comeback? (Score:5, Insightful)

    by virtual_mps ( 62997 ) on Monday August 04, 2003 @06:27PM (#6610229)
    Probably not. Cray made some money back when a supercomputer was something that an ordinary company might need. The capabilities of "normal" computers were much more limited than they are today, so there was a much higher percentage of the buying public likely to want something more. These days the vast majority of users are happy with something mainstream.

    But, you ask, isn't there a lunatic fringe who wants more power at any price? Well, the lunatic fringe ain't what it used to be. During the heyday of cray you got a damn fine box and nothing else. Cray didn't want to worry about your software--or even an OS. A person who needed the speed would plunk down the money for the box and then pay a couple of guys to code everything from scratch. Those days are gone--software is the driving factor these days, and people are far less willing to buy something that's going to force a total code rewrite. Especially if that thing is only going to buy them a couple of years of edge before they need to recode for the next best thing.

    Then there's the question of whether cray can afford to be bigger. The answer is "probably not". If you sell to a lot of customers you need a huge support infrastructure. Cray doesn't have much of one anymore, so they'd need to buy one. (Most of the old support guys left one way or another when SGI came in, or stayed with SGI.) If you have a lot of customers you can spread the costs around, but in the case of a company like cray a support infrastructure means having people sitting around most of the time in every region you sell a machine. Maybe two to four guys per system (24x7, right?) plus some sorta warehouse facility if you enter a new geographical market. That's expensive. You can bill a lot of that cost back to the customers, but that just makes your systems less competitive.

    I think the long term answer is that cray will be a very small niche player, selling to a very select group of (U.S.) government agencies, with the occasional pro forma business customer thrown in so the company can issue press releases. Even most government facilities aren't in a position to buy a cray anymore. (Research money is fairly tight, recoding costs are prohibitive, MTBFs are more of an issue than they used to be, etc.)
    • Re:Comeback? (Score:4, Interesting)

      by VoidEngineer ( 633446 ) on Monday August 04, 2003 @07:55PM (#6610929)
      Hmm... I'm not entirely convinced by your arguments. However, I do agree with you that "during the heyday of cray, you got a damn fine box and nothing else."

      My thinking, however, is that the same is true today and for all of the top 100 supercomputers in the world. That is to say, each one of those machines is a custom hardware installation, and my educated guess is that software still isn't the driving force in the supercomputing market. Rather, algorithms are the driving force. The supercomputer market is geared towards people who want to do very specific tasks, very accurately, and very fast. Example applications might be calculating Fourier transforms (spectroscopic analysis), Mandelbrot sets (weather simulations), prime numbers (cryptography), and statistical derivatives (markets). Any of these types of applications could feasibly require only a few thousand lines of code... At the same time, however, any of these applications are fully capable of utilizing as much hardware resources as you have available...

      The problem is the magnitude at which these few lines of code need to be repeated. Furthermore, each of these types of algorithms can give qualitatively different and more robust results with each order-of-magnitude increase in speed... thereby creating a driving market force for upgrades... We have a computer that can predict the weather 48 hours from now? Well, give us a computer that's 10 times as powerful, and we'll predict it 56 hours from now... Give us one 100 times more powerful, and we'll predict the weather 62 hours from now, and so on, and so on... The point I'm trying to make is that the software isn't the driving force behind these supercomputers... the algorithms are... and the optimized hardware is what the organizations are paying hard cash for, in order to calculate those algorithms fastest (a tiny illustrative kernel follows this comment).

      Remember, we're talking about supercomputers here... we're certainly not talking about super-electronic-typewriters, super-spreadsheet-applications, super-databases, super-webservers, super-videoeditors, etc. etc. Nor are we necessarily talking about super-von-Neumann machines, super-Turing-machines, or super-mainframes. We're talking about supercomputing and the Cray corporation... the company historically responsible for building the machines which simulated the weather and nuclear explosions for many years... I suspect that there are not many end users of such machines and that user interface software is kept at a minimum... ;-) Furthermore, I also suspect that if Cray Inc. built a zettaflop or yottaflop abacus and provided instructions on how to simulate the weather, people around the world would abandon their computers and begin taking abacus lessons... Remember, it's all about the hardware and algorithms in supercomputing...

      But, I'm not a physics or computer science major, so what do I know... That, and I'm beginning to ramble... just my $0.02 worth...
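
      As a purely illustrative footnote to the parent's point about a few thousand lines of code soaking up unlimited hardware: a complete Mandelbrot escape-time kernel (one of the example workloads named above) fits in a couple of dozen lines of C, yet its cost grows without bound as you raise the resolution and iteration count. This is my own sketch, not code associated with any Cray product.

      /* Illustrative escape-time kernel: tiny source, arbitrarily large appetite
       * for compute -- just raise width, height and max_iter. */
      #include <stdio.h>
      #include <complex.h>

      static int escape_time(double complex c, int max_iter) {
          double complex z = 0;
          for (int i = 0; i < max_iter; i++) {
              z = z * z + c;                                   /* the entire "algorithm" */
              if (creal(z) * creal(z) + cimag(z) * cimag(z) > 4.0)
                  return i;
          }
          return max_iter;
      }

      int main(void) {
          const int width = 800, height = 600, max_iter = 1000;
          long total = 0;
          for (int y = 0; y < height; y++)
              for (int x = 0; x < width; x++) {
                  double complex c = (-2.5 + 3.5 * x / width)
                                   + (-1.25 + 2.5 * y / height) * I;
                  total += escape_time(c, max_iter);
              }
          printf("total iterations: %ld\n", total);
          return 0;
      }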
      • Re:Comeback? (Score:5, Insightful)

        by virtual_mps ( 62997 ) on Monday August 04, 2003 @09:06PM (#6611394)
        My thinking, however, is that the same is true today and for all of the top 100 supercomputers in the world. That is to say, each one of those machines is a custom hardware installation,

        Yes and no. The problem is that a cray box has to cover the whole R&D cost for an entire system. When IBM sells you an SP2, most of the R&D is spread across their much higher volume business lines. Same with an intel based cluster--the technology specific to the HPC market is basically the interconnect, and the rest is subsidized by video game players. There's also the compiler cost (you don't sell many fortran compilers outside the scientific market), but the salaries for a few compiler writers are much lower than the cost of designing a cutting-edge cpu from scratch.

        At the same time, however, any of these applications are fully capable of utilizing as much hardware resources as you have available.

        That's always true. The question is whether they can use the resources efficiently, and whether the cost/op is competitive. You're right about the algorithms being the driving force, but I'd argue that it is unusual for an algorithm that's optimized for one architecture to run optimally if you move it to a radically different architecture. People can spend years trying to squeeze a couple more percent out of their code, and they don't want to start from scratch unless there's a very good reason. Then there's the problem that researchers tend not to work in a bubble. Even if you can afford to buy the most expensive machine on the block, you might end up shooting yourself in the foot if nobody else in your field can collaborate with you.

        user interface software is kept at a minimum

        You've got that right--most of the examples I've seen are pretty...spartan.
    • Re:Comeback? (Score:5, Insightful)

      by Rasta Prefect ( 250915 ) on Monday August 04, 2003 @08:01PM (#6610968)
      Probably not. Cray made some money back when a supercomputer was something that an ordinary company might need. The capabilities of "normal" computers were much more limited than they are today, so there was a much higher percentage of the buying public likely to want something more. These days the vast majority of users are happy with something mainstream.

      Cray has never sold computers that are anything like what a normal company would need. Cray machines are made for heavy number crunching - vector processors are made for simulation tasks. They're very good at them. However, they perform abysmally at most other tasks - buying one for use as, say, a database or application server would be stupid.

      But, you ask, isn't there a lunatic fringe who wants more power at any price? Well, the lunatic fringe ain't what it used to be. During the heyday of cray you got a damn fine box and nothing else. Cray didn't want to worry about your software--or even an OS.

      Last time I checked, Cray shipped UNICOS with their machines. It's a fairly BSDish UNIX variant. It's a bit of an oddball, but not all that much more of a PITA than, say, IRIX or AIX. Want to port your beowulf apps? No problem! When I spent a summer working on a T3E all of our multi processor apps used MPI. Vectorization of C and FORTRAN apps is largely taken care of by the compiler. So where's all this programmer investment you're talking about? Most of the kinds of apps that you're going to run on a Cray (Weather models, crash simulations, Gaussian for chemical sims, etc) already run on a Cray, and you're probably going to be modifying them anyway. (A minimal MPI sketch follows this comment.)

      I think the long term answer is that cray will be a very small niche player, selling to a very select group of (U.S.) government agencies, with the occasional pro forma business customer thrown in so the company can issue press releases. Even most government facilities aren't in a position to buy a cray anymore. (Research money is fairly tight, recoding costs are prohibitive, MTBFs are more of an issue than they used to be, etc.)

      Cray isn't in the business of selling large business systems. Cray is, always has been, and likely always will be a competitor in the scientific computing market. Yeah, this means they're not going to be a Sun or IBM that sells to business customers for business needs, but that's not the sort of company they're trying to be, so the comparison is pointless. They're selling machines to people who need to do heavy duty number crunching. This means universities, government agencies and large companies doing lots of product research. Typically the cost of using these sorts of machines is spread around - frequently, instead of buying the machine, you'll go to a company like Network Computing Services and buy time on a machine. It works out well. There will always be a certain number of organizations that need this sort of heavy duty computing power, and Cray will be there to serve them.
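
      For what it's worth, the portability point is easy to illustrate. Below is a minimal, generic MPI program (my own sketch, nothing T3E- or UNICOS-specific): the identical source builds against any MPI implementation, whether the target is a commodity cluster or a Cray.

      /* Generic MPI sketch: each rank does a stand-in piece of work, then rank 0
       * collects the sum. The same source compiles on a Beowulf or a T3E. */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv) {
          int rank, size;
          double local, global;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          local = (double)rank;              /* stand-in for real per-node work */
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("sum over %d ranks: %g\n", size, global);

          MPI_Finalize();
          return 0;
      }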

      • Re:Comeback? (Score:4, Insightful)

        by virtual_mps ( 62997 ) on Monday August 04, 2003 @08:51PM (#6611268)
        Cray has never sold computers that are anything like what a normal company would need. Cray machines are made for heavy number crunching - vector processors are made for simulation tasks. They're very good at them. However, they perform abysmally at most other tasks - buying one for use as, say, a database or application server would be stupid.

        I don't recall saying that cray was trying to sell general business machines. But even for scientific applications, the number of customers who need a cray as opposed to being able to use a commodity cluster is much lower than the number who needed a cray instead of an IBM 360. There are businesses out there who use computers for more than spreadsheets and web servers. By "ordinary company" I meant to draw attention to that part of the market whose budget isn't classified.

        Last time I checked Cray shipped UNICOS with their machines. It's a fairly BSDish UNIX variant. It's a bit of an oddball, but not all that much more of a PITA than say, IRIX or AIX.


        I guess you didn't do much porting of mainstream applications to a cray. The lack of virtual memory, the funny type sizes in C, and other things that application writers make assumptions about (things that aren't technically guaranteed to work in ANSI C but do work on every other system in the world) could make porting a real problem. Things have gotten a lot better, but I can assure you that a unicos port of, say, perl or gcc was not in the same league as an irix port of the same app. One of the things cray is finally bowing to is the demand for virtual memory. Seymour never wanted it (didn't want the performance hit) but it's real hard to sell that in today's marketplace. The question is how much cray can back off of its old "speed is king" philosophy when their whole business is making fast computers.

        Want to port your beowulf apps? No problem! When I spent a summer working on a T3E all of our multi processor apps used MPI.

        You've kinda missed the boat. The point of the cutting-edge cray supercomputers isn't to run mpi apps--those do quite nicely on commodity clusters. The T3E is an MPP super--not a vector super. It's where cray was 10+ years ago, not where they want to be tomorrow. The point of cutting edge is to create new paradigms. That definitely helps your performance, but it kills your compatibility.

        Vectorization of C and FORTRAN apps is largely taken care of by the compiler.

        Wow. Let's just say that when you're on the kind of project that can command the state of the art you don't depend on compiler autoparallelization.

        So where's all this programmer investment you're talking about? Most of the kinds of apps that you're going to run on a Cray (Weather models, crash simulations, Gaussian for chemical sims, etc) already run on a Cray,

        Please, read up on the tera system, for example, and try to understand how it's different from a T3E.
  • He's a great musician! It's been a long time since he had a CD released. Probably due to the RIAA cutting back on CD releases. Well, it's long over - wait....

    Oh!.... that Cray!

    Never mind!
  • 85 replies, even the trolls, and not one "Imagine a beowulf cluster of these" post.

    Can't a guy count on slashdot for anything anymore?

  • Gimme (Score:5, Funny)

    by Cyno ( 85911 ) on Monday August 04, 2003 @06:36PM (#6610291) Journal
    My next couch should be a Cray..
  • Economics of Scale (Score:5, Informative)

    by dprice ( 74762 ) <daprice@nOspam.pobox.com> on Monday August 04, 2003 @06:43PM (#6610346) Homepage

    In the 1970's and 1980's, Cray and other supercomputer companies fit in the niche of "fastest computing at any cost". The design cycles were long for the specialized hardware that pushed the boundaries of the available technology. Companies and government agencies were willing to pay the high price since there was enough processing speed difference between the supercomputers and the "vanilla" computers.

    By the early 1990's, the "attack of the killer microprocessors" came. The PC class processors were still weak, but the higher dollar RISC processors used in workstations, like Sun's, were reaching performance levels close to what the supercomputers were able to deliver. Since they were based on higher volume and more standardized processors, the price/performance of the RISC workstations started eating into the mainframe and supercomputer market. Many of the supercomputer companies died off, and some started to incorporate RISC processors into their designs. By the mid 1990's I believe that Tera and Cray were the last old-school supercomputer companies left. The rest either died or were absorbed into other companies.

    Today, the investment required to produce the fastest processor chips is so high that it requires large unit volumes to pay for the cost of development and production. The PC class processors, with their high volumes, are putting pressure on the old style workstation market, where each company makes their own processor (SPARC/Sun, PA-RISC/HP, Alpha/DEC). We see Sun struggling as the PC's eat their market. Even some large scale supercomputers are based on the PC processors. The majority of the computer spectrum from low to high end is based on the same families of processors (Intel, AMD, PowerPC).

    So that brings us to Cray/Tera. Cray seems to go against the economies of scale that drive the rest of the computing industry. What keeps them running is a small niche that the government is willing to keep funded. It is similar to the funding of exotic bombers and fighter jets. We probably won't see Cray grow much larger than they currently are. They'll be kept running since they form a critical part of national security, or at least that is what the government believes.

  • But seriously, Cray is about 75 miles south of me. It would be really cool to take a tour of their plant and see the X-1 in person. At best my dual PIII 1GHz machine is good for about 640 MegaFlops, but just one of the X-1 node modules is 50 GigaFlops =) X-1 video [cray.com]

  • by putaro ( 235078 ) on Monday August 04, 2003 @06:47PM (#6610391) Journal
    Supercomputing per se died because Intel, DEC, IBM/Motorola had a lot more money to throw at speeding things up than the supercomputing community.

    In the 70's up until the early 90's it was possible to build a custom CPU out of discrete logic that ran significantly faster than the available microprocessors. Cray was able to push their clock cycle down into the nanosecond range through clever design. However, a 1 ns clock cycle == 1 GHz. You can go buy that multi-million dollar CPU for a couple of hundred bucks in today's market.

    In order for supercomputing to be viable you have to be able to provide quantum leap performance above the commodity hardware AND keep your cost/performance ratio in line as well.

    The CRAY-1 came out with a clock speed of about 80 MHz, vector processing and high memory bandwidth at a time when mainstream systems like the PDP 11/70 were running at about 7 MHz with a 1 MB/s memory bus. Microprocessors weren't even a joke compared with the Cray.

    The new Japanese NEC supercomputer came with a price tag of about $160 million if I remember correctly (some estimates say that it took $1G in research funding) and hits 35 TFlops (sustained). #3 on the Top 500 supercomputers list is a Beowulf cluster with 2304 processors coming in at 7.6 TFlops (sustained). Even figuring $2000/processor + interconnect, that puts the Beowulf cluster at around $5 million or 1/32 of the cost for 1/5th of the performance (roughly speaking).

    There are other factors, of course, but the key is that for the supercomputer to stay ahead of the microprocessor a boatload of funding is needed for the supercomputer, and the payoff just isn't really there. If it were, a lot more supercomputer companies would still be in business. (The cost/performance arithmetic above is sketched below.)
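
    As a quick sanity check, the cost/performance comparison above works out as follows; the figures are the poster's own round numbers, not independently sourced.

    /* Rough cost/performance arithmetic using the round figures quoted above. */
    #include <stdio.h>

    int main(void) {
        const double nec_cost  = 160e6, nec_tflops = 35.0;  /* NEC Earth Simulator         */
        const double bw_cost   = 5e6,   bw_tflops  = 7.6;   /* 2304-CPU Beowulf, estimated */

        printf("cost ratio        : cluster is ~1/%.0f the price\n", nec_cost / bw_cost);
        printf("performance ratio : cluster is ~1/%.1f the speed\n", nec_tflops / bw_tflops);
        printf("$ per sustained TFLOPS: NEC ~$%.1fM, cluster ~$%.1fM\n",
               nec_cost / nec_tflops / 1e6, bw_cost / bw_tflops / 1e6);
        return 0;
    }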
    • The new Japanese NEC supercomputer came with a price tag of about $160 million if I remember correctly (some estimates say that it took $1G in research funding) and hits 35 TFlops (sustained). #3 on the Top 500 supercomputers list is a Beowulf cluster with 2304 processors coming in at 7.6 TFlops (sustained). Even figuring $2000/processor + interconnect, that puts the Beowulf cluster at around $5 million or 1/32 of the cost for 1/5th of the performance (roughly speaking).

      Number of TFLOPS isn't everything.

  • by Kargan ( 250092 ) on Monday August 04, 2003 @06:57PM (#6610495) Homepage
    The Sandia National Labs supercomputer (code name: Red Storm), currently being built by Cray, is going to be powered by 10,000 Opteron processors [amd.com]. A 40 Teraflop theoretical peak will put it at the top of the supercomputer list, being approximately 4 Teraflops faster than the NEC Earth Simulator, the current champ.
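
    The 40 Teraflop figure is consistent with a simple peak calculation; the per-chip clock speed and floating-point ops per cycle below are my assumptions about the Opterons involved, not numbers from the article.

    /* Back-of-envelope peak for Red Storm: procs x clock x FP ops per cycle.
     * Clock and ops-per-cycle are assumptions, not published specs. */
    #include <stdio.h>

    int main(void) {
        const double procs         = 10000;   /* from the comment above   */
        const double clock_hz      = 2.0e9;   /* assumed Opteron clock    */
        const double flops_per_cyc = 2.0;     /* assumed FP ops per cycle */

        printf("theoretical peak: %.0f TFLOPS\n",
               procs * clock_hz * flops_per_cyc / 1e12);
        return 0;
    }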
  • I really want to see cray come out with more waterfall computers. I thought that was the greatest thing in the world when I saw it on Beyond2000! way back in the day. The contemporary "elegant mac" isn't even in the same aesthetic/functional dimension as that cray machine.

    Ah, glory days.
  • by mre5565 ( 305546 ) on Monday August 04, 2003 @08:31PM (#6611145)
    If you could ask Mr. Cray, he might say that SRC Computers [srccomp.com] is his legacy, not Cray Computer Corp. He co-founded this company (with several other ex-Cray employees) and died while still an employee/owner.

    Interestingly, SRC is still around without any evidence on their website of shipping a product. My guess is that their customers and/or investors prefer to stay out of the limelight.

  • What a poor choice of acronym. How confusing.
  • by Styx ( 15057 ) on Monday August 04, 2003 @08:46PM (#6611247) Homepage
    I've been using Desktop Cray [xosx.com] for a while now. It took me some time to tweak the settings to perfection, but now it's just running along. Check it out!
  • by Indy1 ( 99447 ) on Monday August 04, 2003 @11:04PM (#6612239)
    John Markoff, the same jerkoff who wrote the less-than-factual articles and book about Kevin Mitnick, and who happens to belong to one of the less reputable media outlets (aka the plagiarized and false stories coming from the NY Times).
  • by Baldrson ( 78598 ) on Tuesday August 05, 2003 @12:35AM (#6612709) Homepage Journal
    It can be argued that Cray died an early death as a result of attempting to revolutionize the semiconductor industry from the chemistry up -- but the question is "Was Seymour Cray right about Gallium Arsenide?" He made the fatal error of attempting to run an organization much larger than his historically successful organizations -- organizations that were no bigger than an extended family -- 50 or so.

    However it happens, it is unlikely Cray was wrong about Gallium Arsenide -- he was not stupid. The question is when will a bureaucratic organization be able to throw marching morons at the problem and make it happen -- since that appears to be the only way technology is funded anymore.

    It's unfortunate Seymour allowed Cray, Inc. to keep his name after he left to found CCC. Even though Cray himself was capitulating to massively parallel silicon in his final days -- he did die almost immediately thereafter.

    PS: It seems creepy he died in a "jeeping accident" -- because that's exactly the way I had portrayed him dying in an April fools joke faxed to all members of congress a few years before -- an "accident" following shortly on the heels of CCC being taken over by Craig Fields of DARPA. I was sending out the joke because of the horrifying way DARPA had spent money on silly favorites within the academic community while guys who were really pushing the envelope like Seymour were going begging for customers -- having acquired private investments.

  • by evilviper ( 135110 ) on Tuesday August 05, 2003 @03:30AM (#6613333) Journal
    I think there is one single reason that the market is poised for a Cray comeback... HEAT!

    Commodity PCs managed to push the speed envelope by pushing the heat envelope... That's the main reason AMD took the speed advantage, because they were willing to operate their processors at higher temperatures than Intel would at the time.

    Now, I would say it's quite a different story. First off, processors are getting closer and closer to the end of the line for heat increases... Pretty soon, no known metal will be able to conduct heat away fast enough to allow computers to operate at room temperature. Even now, dumb little personal computers need serious cooling solutions... Either that, or they need to be some place that has serious air conditioning.

    So, what are companies going to do, even with the current line of processors? Should they invest loads of money in dispersing waste heat, powerful air conditioners, system cooling fans, and software and/or hardware to closely monitor temperatures? Or should they invest in a higher-end system that doesn't put off so much heat, doesn't use up so much electricity, etc.?

    In fact, I think we are even nearing the point where home users are going to get seriously pissed off and start demanding lower-power systems... It's interesting that C3 processors have become so popular despite their lousy performance... (Maybe AMD/Intel will learn something from that)

    So, I do think that either commodity processors will hit the heat ceiling and stagnate like the rotational speeds of current IDE hard drives, OR the electrical and major cooling requirements of commodity processors will become too much to justify the small price savings. Either way, that will leave the market wide open for serious computing companies once again. The only question really is how much longer will it be until one of those two things happens? Well, in the Southern California desert, electricity prices are still very high, and the temperatures are so very high that running a modern computer 24 hours a day requires your home cooling to also be running 24 hours a day, just to operate within the heat tolerances. I don't think it will be much longer before more of the country, and the world, will reach the same point.
