
Time For A Cray Comeback?
Boone^ writes "The New York Times has an article (free reg. req.) talking about Cray Inc.'s recent resurgence in the realm of supercomputing. It discusses a bit of Cray's decline when the Cold War ended, "the occupation" under SGI, and the rebirth of the company after the Tera (now Cray Inc.) purchase. Cray Inc. has recently been shipping its vector-based Cray X1 machine, is designing ASCI Red Storm, and was one of three vendors (along with Sun and IBM) to win a large DARPA contract (PDF link) to design and develop a petaflops machine by 2010. Could Cray Inc. be poised for a comeback? Wall Street seems to think so."
Registration not required (Score:5, Informative)
Posting as Anonymous Coward, please award my Karma to starving children in the world.
Definitely coming back (Score:2, Funny)
Re:Definitely coming back (Score:5, Funny)
is there a secret message here? should tom ridge be called?
Re:Didn't Sun buy all the Cray technology? (Score:3, Interesting)
The rest of the company went to SGI.
So basically the server/SPARC division went to Sun, which got the technology for their Enterprise systems.
The rest of the supercomputer units (the Alpha-based and the vector-based units) went to SGI, which did... nothing with them. Oh yeah, they named some interconnects CrayLink or something, but they had zero Cray technology in them, they j
Ha! Wall Street has more confidence in SCO (Score:4, Interesting)
Re:Ha! Wall Street has more confidence in SCO (Score:4, Funny)
Look at me, I'm a stock analyst!
Icon is back (Score:2, Insightful)
Re:Icon is back (Score:3, Funny)
Imagine a beo...
Re:Icon is back (Score:2)
And that's only the ones we know of.
The NSA probably thinks top500.org is rather amusing.
Re:Icon is back (Score:2, Interesting)
Re:Icon is back (Score:5, Interesting)
I remember a story from a NSA contract worker.
In the early days of Cray, he and many others were wondering how they could keep things running, considering that their official budgets only showed ten or so sales per year.
Until he got the tour of the NSA computer plant, where they had a hall the size of two football fields, filled with Crays.
Re:Icon is back (Score:2)
I would like people to stop calling clusters supers
Re:Icon is back (Score:2)
That's because my home computer is a Cray X-MP.
Petaflops by 2010? (Score:5, Funny)
equipped with an opto-quantic Emotion Engine VI
and a couple petabytes of holographic storage.
Definitely (Score:4, Informative)
So definitely, time for Cray to come back and retake the supercomputer industry crown.
2010? (Score:5, Funny)
I had to literally step on their faces to get a Big Mac.
Re:2010? (Score:2, Funny)
So McDonalds is selling McFur burgers then?
Hmm... maybe we
I don't mind meat, but I generally draw the line at eating hair... ^_^
Re:2010? (Score:2)
If the Big Macs at that McDonald's have fur, I wouldn't want them, either.
Correct me if I'm wrong ... (Score:5, Insightful)
If you look at the list of top 100 supercomputers, there are systems that are almost 15 years old or even older (not sure on a few). I know these take years to build and are multibillion-dollar projects, but the time in between has got to be a killer.
Then there's the question of... what do you need a supercomputer for? The applications for a petaflop computer are pretty limited, unless you're doing mass storage, cryptography (cracking), or simulations.
Don't get me wrong, I'm all about nuclear testing being done in 1's and 0's instead of in the ocean or in the desert, but how big of a bomb do you really need when it's estimated there are enough nukes to blast the entire land surface of the Earth three times over?
Re:Correct me if I'm wrong ... (Score:5, Insightful)
To advance the state of the art. And not just in the field of computers, but also in any field that ends up benefiting from this. Which is potentially very many. Aerospace, geology, meteorology... there are BUNCHES of fields that greatly benefit from having more and more massively powerful computers. Sure, most projects can't afford the latest and greatest of the state of the art in supercomputing, but the fact that the state of the art progresses will push prices down on the older technologies that most labs CAN afford. This is a benefit for science as a whole.
Re:Correct me if I'm wrong ... (Score:5, Interesting)
There are other uses too. Consider: the weather guys working on global warming and other climate modeling want a 500-petaflop sustained-speed, massive-memory machine to get the granularity that they want.
BTW, what's the 15 YO machine? I can't think of any...certainly not ones that are still in the Top 500. Hell, the ones I worked on 10 years ago, you can nearly buy the floppage on the desktop now...
As an interesting aside, the DARPA contract is out in part because they think the traditional drivers in computing speed are going to peter out around 2010...the implications of that are definitely interesting, no?
Re:Correct me if I'm wrong ... (Score:2)
Do you think that by then people will have stopped compiling their Christmas lists in Access?
Re:Correct me if I'm wrong ... (Score:5, Interesting)
Re:Correct me if I'm wrong ... (Score:5, Funny)
You're missing the big picture...
Massive multiplayer Quake on a 614,400 x 819,200 screen.
Thank you Cray.
Re:Correct me if I'm wrong ... (Score:5, Interesting)
Well, the earth is over 2/3rds covered with water, and now we have the technology to reach the Moon, Mars, Venus and beyond. Remember the spectacle when a comet hit Jupiter? Just imagine a Beowulf of those, but really big nukes instead.
On a more serious and less morbid note, I bet some other uses exist in physics, medicine and even cosmology. I even hear they compare 'potential' cures for diseases using computer modeling to design drugs that we don't yet know how to make; good old biotech. You are correct that yes, this IS a very, very limited market, but when you sell them for a billion bucks each, you don't need to match Dell's volume to make a profit. I wouldn't be surprised if the technology leads to some advancements in our pitiful micro world as well.
Re:Correct me if I'm wrong ... (Score:3, Interesting)
Well, exaggerating to make a point perhaps. I checked out their website a
It's also about better (not just faster) computing (Score:4, Insightful)
Re:It's also about better (not just faster) comput (Score:2)
Don't just think about solving a static problem faster, it's also about solving a problem better through the use of more variables.
You're right and all, but the pedant in me can't help but point out that it's not necessarily such a qualitative difference as you suggest. First, pick the number of variables that you want to use. Now it's a static problem, and the only difference between two machines is how fast each one will solve it.
On the other hand, you've got a good point, in that the difference can
Re:Correct me if I'm wrong ... (Score:5, Informative)
From this site [top500.org], you can see the breakdown by organization: There are a lot of companies that use supercomputers, although maybe not the type you're thinking of. Of course, there are the number-crunchers: oil companies are big users (to crunch data & find new oil), and car companies (BMW). But there are also the transaction-processors, like SprintPCS and Ebay (used to be in the top 500), that make the list just by the sheer number of connected processors.
Here's the latest list [top500.org]
Classified? Re:Correct me if I'm wrong ... (Score:5, Insightful)
Especially because it's so much easier to hide a computer than an airplane. No sightings in Area 51....
We have to assume that the state of the art is way past the public data. Cray has a "lousy" $150MM in yearly revenue. They could be spending 10X that on heavy computing for national security. The government is spending $25BB on intelligence and another $400BB on defense every year. Cray could be a drop in the bucket, even a red herring. I'd love to know what is going on in the basements at Fort Meade.
Re:Correct me if I'm wrong ... (Score:2)
Actually, this sort of machine would be a total waste for mass data storage. On the other hand, there are a great many private sector uses for this sort of machine. Ford for instance runs a number of their crash test simulations on Cray vector machines.
Resurrection, not come back (Score:2, Insightful)
and why not... (Score:2)
the high end (Score:2)
Dude, you're getting a Cray! (Score:2)
Sun Enterprise 10000 (Score:3, Informative)
Re:Sun Enterprise 10000 (Score:2)
Re:Sun Enterprise 10000 (Score:3, Funny)
Re:Sun Enterprise 10000 (Score:5, Informative)
But it's still the same core team down in San Diego, so I like to think of the E10000 as being a Celerity product.
Re:Sun Enterprise 10000 (Score:2)
Re:Sun Enterprise 10000 (Score:2)
I always heard the E10000 was actually a Cray product.
I believe that the origin of the E10000 was from a company called Floating Point Systems. They were working with Sun to develop a SPARC based parallel processing computer. If I remember correctly, Cray bought FPS, and later SGI bought Cray. Since the FPS machine was SPARC based, and since SGI wasn't interested in SPARC, they sold it to Sun.
So in a roundabout way, the E10000 came from Cray, but I think it was mostly from the original Floating Poi
Re:Sun Enterprise 10000 (Score:2, Informative)
How strange... (Score:2)
Tera's website (Score:2)
The requested URL / was not found on this server.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Apache/1.3.26 Server at www.tera.com Port 80
---
Looks like they need to host their website on a SUPERcomputer to handle a Slashdotting! (Noooooobody expects a Slashdotting! [slashdot.org])
Cascade Link: Karma Whoring (Score:3, Informative)
The home page [cray.com] at Cray for the Cascade project.
There are some interesting PDFs there. Chew, mull, and consider.
Also consider what Horst Simon, head of NERSC [nersc.gov] said here [hoise.com] too.
That's good (Score:2, Funny)
I'm confused... (Score:3, Funny)
Re:I'm confused... (Score:4, Funny)
(I'm a Mac user. I get to make this joke.)
Re:I'm confused... (Score:4, Funny)
Re:I'm confused... (Score:2)
Yeah, that and dual-proc Xeons.
Before we all get sentimental... (Score:4, Interesting)
Re:Before we all get sentimental... (Score:2)
Comparison of supercomputers to desktops (Score:3, Interesting)
One of the comparisons made when I was at university was of a 30-something MHz 386 with a supercomputer from 1973, showing how they do about the same amount of processing/data transfer but in completely different ways. I found that fascinating.
Comeback? (Score:5, Insightful)
But, you ask, isn't there a lunatic fringe who wants more power at any price? Well, the lunatic fringe ain't what it used to be. During the heyday of Cray you got a damn fine box and nothing else. Cray didn't want to worry about your software--or even an OS. A person who needed the speed would plunk down the money for the box and then pay a couple of guys to code everything from scratch. Those days are gone--software is the driving factor these days, and people are far less willing to buy something that's going to force a total code rewrite. Especially if that thing is only going to buy them a couple of years of edge before they need to recode for the next best thing.
Then there's the question of whether Cray can afford to be bigger. The answer is "probably not". If you sell to a lot of customers you need a huge support infrastructure. Cray doesn't have much of one anymore, so they'd need to buy one. (Most of the old support guys left one way or another when SGI came in, or stayed with SGI.) If you have a lot of customers you can spread the costs around, but in the case of a company like Cray a support infrastructure means having people sitting around most of the time in every region you sell a machine. Maybe two to four guys per system (24x7, right?) plus some sort of warehouse facility if you enter a new geographical market. That's expensive. You can bill a lot of that cost back to the customers, but that just makes your systems less competitive.
I think the long-term answer is that Cray will be a very small niche player, selling to a very select group of (U.S.) government agencies, with the occasional pro forma business customer thrown in so the company can issue press releases. Even most government facilities aren't in a position to buy a Cray anymore. (Research money is fairly tight, recoding costs are prohibitive, MTBFs are more of an issue than they used to be, etc.)
Re:Comeback? (Score:4, Interesting)
My thinking, however, is that the same is true today and for all of the top 100 supercomputers in the world. That is to say, each one of those machines is a custom hardware installation, and my educated guess is that software still isn't the driving force in the supercomputing market. Rather, algorithms are the driving force. The supercomputer market is geared towards people who want to do very specific tasks, very accurately, and very fast. Example applications might be calculating Fourier transforms (spectroscopic analysis), Mandelbrot sets (fractal rendering), prime numbers (cryptography), and statistical derivatives (markets). Any of these types of applications could feasibly require only a few thousand lines of code... At the same time, however, any of these applications is fully capable of utilizing as many hardware resources as you have available...
The problem is the magnitude at which these few lines of code need to be repeated. Furthermore, each of these types of algorithms can give qualitatively different and more robust results at each order-of-magnitude increase in speed... thereby creating a driving market force for upgrades... We have a computer that can predict the weather 48 hours from now? Well, give us a computer that's 10 times as powerful, and we'll predict it 56 hours from now... Give us one 100 times more powerful, and we'll predict the weather 62 hours from now, and so on, and so on... The point I'm trying to make is that the software isn't the driving force behind these supercomputers... the algorithms are... and the optimized hardware is what the organizations are paying hard cash for, in order to calculate those algorithms fastest.
Remember, we're talking about supercomputers here... we're certainly not talking about super-electronic-typewriters, super-spreadsheet-applications, super-databases, super-webservers, super-videoeditors, etc. etc. Nor are we necessarily talking about super-von-Neumann machines, super-Turing-machines, or super-mainframes. We're talking about supercomputing and the Cray corporation... the company historically responsible for building the machines which simulated the weather and nuclear explosions for many years... I suspect that there are not many end users of such machines and that user interface software is kept to a minimum...
But I'm not a physics or computer science major, so what do I know... That, and I'm beginning to ramble... just my $0.02 worth...
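(Appendix to my ramble: here's roughly what I mean by "a few lines of code with an unbounded appetite for flops". A Mandelbrot escape-time kernel is about a dozen lines, and every grid point is independent, so you can throw as many processors at the rows as you own. Sketch only, in plain C; nothing from any particular production code.)
```c
/* A tiny kernel that can eat arbitrary hardware: Mandelbrot escape-time
   iteration. Each grid point is independent, so rows parallelize freely. */
#include <stdio.h>

static int escape_time(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    int k;
    for (k = 0; k < max_iter; k++) {
        double zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > 4.0)
            break;                       /* point diverged */
        zi = 2.0 * zr * zi + ci;
        zr = zr2 - zi2 + cr;
    }
    return k;
}

int main(void)
{
    int y, x;
    /* toy 20x60 grid; a "real" run just cranks the resolution and depth */
    for (y = 0; y < 20; y++) {
        for (x = 0; x < 60; x++) {
            int it = escape_time(-2.0 + x * 0.05, -1.0 + y * 0.1, 256);
            putchar(it == 256 ? '#' : ' ');
        }
        putchar('\n');
    }
    return 0;
}
```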
Re:Comeback? (Score:5, Insightful)
Yes and no. The problem is that a Cray box has to cover the whole R&D cost for an entire system. When IBM sells you an SP2, most of the R&D is spread across their much higher volume business lines. Same with an Intel-based cluster--the technology specific to the HPC market is basically the interconnect, and the rest is subsidized by video game players. There's also the compiler cost (you don't sell many Fortran compilers outside the scientific market), but the salaries for a few compiler writers are much lower than the cost of designing a cutting-edge CPU from scratch.
That's always true. The question is whether they can use the resources efficiently, and whether the cost/op is competitive. You're right about the algorithms being the driving force, but I'd argue that it is unusual for an algorithm that's optimized for one architecture to run optimally if you move it to a radically different architecture. People can spend years trying to squeeze a couple more percent out of their code, and they don't want to start from scratch unless there's a very good reason. Then there's the problem that researchers tend not to work in a bubble. Even if you can afford to buy the most expensive machine on the block, you might end up shooting yourself in the foot if nobody else in your field can collaborate with you.
You've got that right--most of the examples I've seen are pretty...spartan.
Re:Comeback? (Score:5, Insightful)
Cray has never sold computers that are anything like what a normal company would need. Cray machines are made for heavy number crunching - vector processors are made for simulation tasks. They're very good at them. However, they perform abysmally at most other tasks - buying one for use as, say, a database or application server would be stupid.
But, you ask, isn't there a lunatic fringe who wants more power at any price? Well, the lunatic fringe ain't what it used to be. During the heyday of cray you got a damn fine box and nothing else. Cray didn't want to worry about your software--or even an OS.
Last time I checked, Cray shipped UNICOS with their machines. It's a fairly BSDish UNIX variant. It's a bit of an oddball, but not all that much more of a PITA than, say, IRIX or AIX. Want to port your Beowulf apps? No problem! When I spent a summer working on a T3E, all of our multiprocessor apps used MPI. Vectorization of C and FORTRAN apps is largely taken care of by the compiler. So where's all this programmer investment you're talking about? Most of the kinds of apps that you're going to run on a Cray (weather models, crash simulations, Gaussian for chemical sims, etc.) already run on a Cray, and you're probably going to be modifying them anyway.
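To illustrate what I mean: an MPI code is just plain C with a handful of library calls, and the same source builds on a T3E, an SP, or a Linux cluster. A minimal sketch (standard MPI calls only; the problem and numbers are made up):
```c
/* Minimal MPI sketch: each rank sums part of a series, rank 0 collects. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    long i, n = 1000000;            /* arbitrary problem size */
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank handles an interleaved slice of the loop */
    for (i = rank; i < n; i += size)
        local += 1.0 / (double)(i + 1);

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic sum H(%ld) = %f on %d ranks\n", n, total, size);

    MPI_Finalize();
    return 0;
}
```
Build it with whatever MPI wrapper your site has (mpicc or the vendor cc) and run it on however many processors you've got; that's the whole portability story.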
I think the long term answer is that cray will be a very small niche player, selling to a very select group of (U.S.) government agencies, with the occasional pro forma business customer thrown in so the company can issue press releases. Even most government facilities aren't in a position to buy a cray anymore. (Research money is fairly tight, recoding costs are prohibative, MTBF's are more of an issue then they used to be, etc.)
Cray isn't in the business of selling large business systems. Cray is, always has been, and likely always will be a competitor in the scientific computing market. Yeah, this means they're not going to be a Sun or IBM that sells to business customers for business needs, but that's not the sort of company they're trying to be, so the comparison is pointless. They're selling machines to people who need to do heavy-duty number crunching. This means universities, government agencies and large companies doing lots of product research. Typically the cost of using these sorts of machines is spread around - frequently, instead of buying the machine, you'll go to a company like Network Computing Services and buy time on a machine. It works out well. There will always be a certain number of organizations that need this sort of heavy-duty computing power, and Cray will be there to serve them.
Re:Comeback? (Score:4, Insightful)
I don't recall saying that Cray was trying to sell general business machines. But even for scientific applications, the number of customers who need a Cray as opposed to being able to use a commodity cluster is much lower than the number who needed a Cray instead of an IBM 360. There are businesses out there who use computers for more than spreadsheets and web servers. By "ordinary company" I meant to draw attention to that part of the market whose budget isn't classified.
I guess you didn't do much porting of mainstream applications to a Cray. The lack of virtual memory, the funny type sizes in C, and other things that application writers make assumptions about (things that aren't technically guaranteed to work in ANSI C but do work on every other system in the world) could make porting a real problem. Things have gotten a lot better, but I can assure you that a UNICOS port of, say, perl or gcc was not in the same league as an IRIX port of the same app. One of the things Cray is finally bowing to is the demand for virtual memory. Seymour never wanted it (didn't want the performance hit), but it's real hard to sell that in today's marketplace. The question is how much Cray can back off of its old "speed is king" philosophy when their whole business is making fast computers.
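To make the type-size point concrete, here's the kind of "works on every other system" C I mean. (Illustrative sketch only; exact type sizes varied by Cray model, and the only firm claim is that ANSI C doesn't guarantee the assumptions below.)
```c
/* Non-portable-but-everywhere assumptions that bite on unusual hardware.
   ANSI C guarantees minimum ranges for types, not exact sizes. */
#include <stdio.h>

struct rec { int id; float val; };  /* code elsewhere hardcodes "8 bytes" */

int main(void)
{
    int x = 42;
    long p;

    /* Assumption 1: sizeof(int) == 4, so binary files interchange.
       On word-oriented Crays, int was reportedly 64-bit, so they don't. */
    printf("sizeof(int) = %lu\n", (unsigned long)sizeof(int));
    printf("sizeof(struct rec) = %lu\n", (unsigned long)sizeof(struct rec));

    /* Assumption 2: a pointer round-trips through a long. True on most
       workstations of the day, but implementation-defined per the
       standard, and byte pointers on a word-addressed machine are exotic. */
    p = (long)&x;
    printf("pointer smuggled through a long: %d\n", *(int *)p);
    return 0;
}
```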
You've kinda missed the boat. The point of the cutting-edge Cray supercomputers isn't to run MPI apps--those do quite nicely on commodity clusters. The T3E is an MPP super--not a vector super. It's where Cray was 10+ years ago, not where they want to be tomorrow. The point of cutting edge is to create new paradigms. That definitely helps your performance, but it kills your compatibility.
Wow. Let's just say that when you're on the kind of project that can command the state of the art you don't depend on compiler autoparallelization.
Please, read up on the tera system, for example, and try to understand how it's different from a T3E.
Re:Comeback? (Score:2)
It's been a while since Robert Cray had a CD! (Score:2, Funny)
Oh!.... that Cray!
Never mind!
Oh come on now, slashdot. (Score:2, Offtopic)
Can't a guy count on slashdot for anything anymore?
Gimme (Score:5, Funny)
Economies of Scale (Score:5, Informative)
In the 1970's and 1980's, Cray and other supercomputer companies fit in the niche of "fastest computing at any cost". The design cycles were long for the specialized hardware that pushed the boundaries of the available technology. Companies and government agencies were willing to pay the high price since there was enough processing speed difference between the supercomputers and the "vanilla" computers.
By the early 1990's, the "attack of the killer microprocessors" came. The PC-class processors were still weak, but the higher-dollar RISC processors used in workstations, like Sun's, were reaching performance levels close to what the supercomputers were able to deliver. Since they were based on higher-volume and more standardized processors, the price/performance of the RISC workstations started eating into the mainframe and supercomputer market. Many of the supercomputer companies died off, and some started to incorporate RISC processors into their designs. By the mid 1990's, I believe, Tera and Cray were the last old-school supercomputer companies left. The rest either died or were absorbed into other companies.
Today, the investment required to produce the fastest processor chips is so high that it requires large unit volumes to pay for the cost of development and production. The PC class processors, with their high volumes, are putting pressure on the old style workstation market, where each company makes their own processor (SPARC/Sun, PA-RISC/HP, Alpha/DEC). We see Sun struggling as the PC's eat their market. Even some large scale supercomputers are based on the PC processors. The majority of the computer spectrum from low to high end is based on the same families of processors (Intel, AMD, PowerPC).
So that brings us to Cray/Tera. Cray seems to go against the economies of scale that drive the rest of the computing industry. What keeps them running is a small niche that the government is willing to keep funded. It is similar to the funding of exotic bombers and fighter jets. We probably won't see Cray grow much larger than they currently are. They'll be kept running since they form a critical part of national security, at least that is what the government believes.
So when is a desktop version coming out ;) (Score:2)
The trick is keeping ahead of the commodity guys (Score:5, Interesting)
In the 70's up until the early 90's it was possible to build a custom CPU out of discrete logic that ran significantly faster than the available microprocessors. Cray was able to push their clock cycle down into the nanosecond range through clever design. However, a 1ns clock rate == 1GHz. You can go buy that multi-million dollar CPU for a couple of hundred bucks in today's market.
In order for supercomputing to be viable you have to be able to provide quantum-leap performance above the commodity hardware AND keep your cost/performance ratio in line as well.
The CRAY-1 came out with a clock speed of about 80 MHz, vector processing, and high memory bandwidth at a time when mainstream systems like the PDP-11/70 were running at about 7 MHz with a 1 MB/s memory bus. Microprocessors weren't even a joke compared with the Cray.
The new Japanese NEC supercomputer came with a price tag of about $160 million if I remember correctly (some estimates say that it took $1G in research funding) and hits 35 TFlops (sustained). #3 on the Top 500 supercomputers list is a Beowulf cluster with 2304 processors coming in at 7.6 TFlops (sustained). Even figuring $2000/processor + interconnect, that puts the Beowulf cluster at around $5 million or 1/32 of the cost for 1/5th of the performance (roughly speaking).
There are other factors, of course, but the key is that for the supercomputer to stay ahead of the microprocessor a boatload of funding is needed for the supercomputer and the payoff just isn't really there. If it was a lot more supercomputer companies would still be in business.
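To spell out that arithmetic (same figures as above, so treat the output as ballpark):
```c
/* Back-of-envelope cost/performance from the figures quoted above. */
#include <stdio.h>

int main(void)
{
    double nec_cost   = 160e6;   /* quoted Earth Simulator price, USD */
    double nec_tflops = 35.0;    /* quoted sustained TFLOPS */

    double nodes      = 2304.0;  /* cluster processor count */
    double node_cost  = 2000.0;  /* assumed USD per processor, incl. interconnect share */
    double clu_cost   = nodes * node_cost;   /* roughly $4.6M */
    double clu_tflops = 7.6;     /* quoted sustained TFLOPS */

    printf("cluster cost ratio:  1/%.0f of the NEC machine\n",
           nec_cost / clu_cost);
    printf("cluster perf ratio:  1/%.1f of the NEC machine\n",
           nec_tflops / clu_tflops);
    printf("$ per sustained GFLOPS: NEC %.0f, cluster %.0f\n",
           nec_cost / (nec_tflops * 1000.0),
           clu_cost / (clu_tflops * 1000.0));
    return 0;
}
```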
Re:The trick is keeping ahead of the commodity guy (Score:3, Insightful)
Number of TFLOPS isn't everything.
looks like Cray is going with the Opteron (Score:5, Informative)
More elegant than the macs, back in the day (Score:2, Insightful)
Ah, glory days.
Re:More elegant than the macs, back in the day (Score:3, Funny)
Are you saying the Cray has an extra mouse button?
Seymour Cray's Legacy (Score:3, Interesting)
SRC Computers [srccomp.com] is his legacy, not Cray Computer Corp.
He co-founded this company (with several other ex-Cray employees) and died while still an employee/owner.
Interestingly, SRC is still around without any evidence on their website of shipping a product. My guess is that their customers and/or investors prefer to stay out of the limelight.
Re:Seymour Cray's Legacy (Score:4, Interesting)
The paper claims in its conclusion a speedup of ~800 (for DES encryption) and ~1600 times (for DES breaking) over C code for the P4.
I wonder who would be interested in that?!
ASCI(I) (Score:2)
Cray Comeback? Desktop Cray! (Score:4, Informative)
disregard story, it's more Markoff fodder (Score:4, Interesting)
Gallium Arsenide's Day Will Come (Score:3, Interesting)
However it happens, it is unlikely Cray was wrong about Gallium Arsenide -- he was not stupid. The question is when will a bureaucratic organization be able to throw marching morons at the problem and make it happen -- since that appears to be the only way technology is funded anymore.
It's unfortunate Seymour allowed Cray, Inc. to keep his name after he left to found CCC. Even though Cray himself was capitulating to massively parallel silicon in his final days -- he did die almost immediately thereafter.
PS: It seems creepy he died in a "jeeping accident" -- because that's exactly the way I had portrayed him dying in an April fools joke faxed to all members of congress a few years before -- an "accident" following shortly on the heels of CCC being taken over by Craig Fields of DARPA. I was sending out the joke because of the horrifying way DARPA had spent money on silly favorites within the academic community while guys who were really pushing the envelope like Seymour were going begging for customers -- having acquired private investments.
HEAT is the reason CRAY can come back (Score:3, Interesting)
Commodity PCs managed to push the speed envelope by pushing the heat envelope... That's the main reason AMD took the speed advantage, because they were willing to operate their processors at higher temperatures than Intel would at the time.
Now, I would say it's quite a different story. First off, processors are getting closer and closer to the end of the line for heat increases... Pretty soon, no known metal will be able to conduct heat away fast enough to allow computers to operate at room temperature. Even now, dumb little personal computers need serious cooling solutions... Either that, or they need to be someplace that has serious air conditioning.
So, what are companies going to do, even with the current line of processors? Should they invest loads of money in dispersing waste heat, powerful air conditioners, system cooling fans, and software and/or hardware to closely monitor temperatures? OR Should they invest in a higher-end system that doesn't put off so much heat, doesn't use up so much electricity, etc?
In fact, I think we are even nearing the point where home users are going to get seriously pissed off and start demanding lower-power systems... It's interesting that C3 processors have become so popular despite their lousy performance... (Maybe AMD/Intel will learn something from that)
So, I do think that either commodity processors will hit the heat ceiling and stagnate like the rotational speeds of current IDE hard drives, OR the electrical and major cooling requirements of commodity processors will become too much to justify the small price savings. Either way, that will leave the market wide open for serious computing companies once again. The only question really is how much longer will it be until one of those two things happens? Well, in the Southern California desert, electricity prices are still very high, and the temperatures are so very high that running a modern computer 24 hours a day requires your home cooling to also be running 24 hours a day, just to operate within the heat tolerances. I don't think it will be much longer before more of the country, and the world, will reach the same point.
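To put rough numbers on the electricity argument (all assumed figures: a 300 W box, $0.15/kWh, and 50% extra for the air conditioning that pumps the heat back out; adjust to taste):
```c
/* Rough annual power cost of one always-on PC, with A/C overhead.
   Every input is an assumption for illustration, not a measurement. */
#include <stdio.h>

int main(void)
{
    double watts        = 300.0;  /* assumed draw of box + monitor */
    double price_kwh    = 0.15;   /* assumed desert-summer rate, USD */
    double cooling_mult = 1.5;    /* assume A/C adds 50% to remove the heat */

    double kwh_year = watts / 1000.0 * 24.0 * 365.0;
    double cost     = kwh_year * cooling_mult * price_kwh;

    printf("%.0f kWh/year of compute heat -> about $%.0f/year with cooling\n",
           kwh_year, cost);
    return 0;
}
```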
Re:explain (Score:4, Informative)
Re:explain (Score:4, Funny)
Re:explain (Score:2, Informative)
Re:explain (Score:5, Informative)
And if your supercomputer has multiple processors, they are generally designed to cooperate closely for efficiency, whereas a cluster has to go through Ethernet and hardware layers to communicate between nodes. Granted, that is fast, but on-board communication is faster.
It seems strange, but a multiple-processor computer can actually perform a task slower than just one processor working on the problem if the program and OS aren't designed well. So a lot of the value of a supercomputer comes in its design, and the reputation of the manufacturer. And Cray is pretty reliable in my book.
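The standard way to see that node-to-node cost is a ping-pong test. A bare-bones MPI version (standard calls only; run it with two ranks, e.g. mpirun -np 2) looks like this:
```c
/* MPI ping-pong: round-trip a 1-byte message between ranks 0 and 1 to
   expose interconnect latency, the number clusters lose on. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, i, reps = 1000;
    char byte = 0;
    double t0;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {           /* rank 0 sends first, then waits */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {    /* rank 1 echoes everything back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)
        printf("one-way latency ~ %.1f microseconds\n",
               (MPI_Wtime() - t0) / reps / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```
Run it on two CPUs in one box and then on two boxes over Ethernet; the gap between those two numbers is the whole point.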
But the REAL key to the potential comeback of the Cray computer will be whether or not it still has cool bubbles! Wow!!! Cray computing... the inventor of case mods.
Re:explain (Score:4, Funny)
I like the way Cray put it:
"If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"
- Seymour Cray (1925-1996), father of supercomputing
And how about a few more Cray quotes?
"#3 pencils and quadrille pads."
- Seymour Cray (1925-1996), when asked what CAD tools he used to design the Cray-1 supercomputer; he also recommended using the back side of the pages so that the lines were not so dominant.
"I just bought a Mac to help me design the next Cray."
- Seymour Cray (1925-1996), when informed that Apple Inc. had recently bought a Cray supercomputer to help them design the next Mac.
I wonder what he's using now? A PalmPilot?
Re:explain (Score:5, Funny)
Re:explain (Score:4, Funny)
Re:explain (Score:5, Interesting)
My problem is that I have to move 1000 people from NY to London.
Now I can either:
1. I can buy a plane that is 20 times faster and 20 times more expensive. That's the supercomputer.
2. I can buy 9 other planes (same as mine) and accomplish the same result as in 1 for less than half the price (I'll let you do the math). That's the cluster.
3. I can buy a plane that has a capacity of 1000 people. That's the parallel supercomputer. But while that one can do the deal for my specific problem, it proves to be not that flexible if my problem changes (i.e.: 500 people NY->London and 500 people NY->LA).
That's the power of the Beowulf cluster!!!
Re:explain (Score:3, Insightful)
The Cray design philosophy is for solving problems that can't be split up easily. If all of the parts of the problem depend heavily on one another, you pay a large price for communication when you split it up. That's the situation where the cluster doesn't do as well as the Cray. So each des
Re:explain (Score:3, Interesting)
In this case option 3 makes sense.
You could say that the 6 hours is a reasonable limit, but sometimes (not predictably) you need as many people as you can in England before (amount of time not predictable either). In this case, option 1 makes sense because both options 2 and 3
Re:explain (Score:3, Interesting)
you couldn't surgically separate them
How do you stuff them in the plane then?
A good constraint for option 1 would be that you need to have them ASAP and the overall transfer could be interrupted anytime (before the 6th hour), and at that time you still want as many people as possible. Le
Re:explain (Score:5, Informative)
Memory-to-processor feeding: std OTS processors are often idle because the memory subsystem cannot feed the processor fast enough. This is bad now. It will be getting a lot worse.
Interconnections between processors: this goes beyond merely processors on a board, but between boxes. The bus architectures out there for the std OTS hardware get saturated very quickly. This gets worse between boxes. In addition, the latency on Myrinet and Quadrics (compared to what Cray et al. do) is horrible, even if it is excellent compared to Ethernet.
Problem set vs. architecture: not all problems map well to clusters, or even SMP boxen. Some map best to vector machines. Some map best to tightly integrated MPPs. Some map to moderately tight clusters. Some are just plain 'embarrassingly parallel'. Others are highly threaded and don't work well on vector or scalar machines. Etc., etc. The architecture ought to match the problem set.
MTBF: mean time between failures. Commodity hardware goes kaputt much more often. A cluster capable of the teraflop performance of custom hardware tends to need constant and evil levels of care and feeding: i.e., you better have a grad student on roller blades.
Those are just off the top of my head. I am sure that others will tell you more before I can post again. ;)
Summarized: bandwidth, latency, problem set, and failure rate.
HTH.
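P.S. To put a number on the memory-feeding point: take an assumed 2 GFLOPS peak CPU on an assumed 1 GB/s memory bus running a daxpy-style loop, and the bandwidth, not the clock, sets the ceiling:
```c
/* Why memory bandwidth caps sustained FLOPS: a daxpy-style operation
   moves 24 bytes per 2 flops, so the bus sets the limit, not the clock.
   Both hardware figures below are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    double peak_gflops = 2.0;   /* assumed CPU peak, GFLOPS */
    double mem_gbs     = 1.0;   /* assumed memory bandwidth, GB/s */

    /* y[i] = a*x[i] + y[i]: read x, read y, write y = 24 bytes; 2 flops */
    double flops_per_byte = 2.0 / 24.0;
    double sustained = mem_gbs * flops_per_byte;

    printf("bandwidth-limited: %.2f GFLOPS sustained of %.1f peak (%.0f%%)\n",
           sustained, peak_gflops, 100.0 * sustained / peak_gflops);
    return 0;
}
```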
Re:explain (Score:5, Interesting)
Hahahaha. Have you ever actually run a supercomputer? They tend to have much higher failure rates than normal servers. Couple of reasons: first, they push the envelope of a given technology. The sweet spot for stability is not the leading edge. Second, they're not nearly as well tested as mainstream hardware. On a platform with thousands of installations you're much less likely to run into a problem nobody has seen before than you are on a platform with only dozens of installations.
Re:explain (Score:3, Interesting)
Have you ever actually run a supercomputer?
You know, that's kinda funny, since it's my current job. ;) I'm a NERSC employee. :P
You're right, until the system hits maturity. Our T3E, before being retired, had a lot fewer hardware problems than our Linux cluster does. Or the SP3 we have, for that matter.
BTW, since it's rather hard to find a job these days for some people in the computing realm, we're hiring [slashdot.org].
Re:explain (Score:4, Interesting)
Re:explain (Score:2)
time to market in a market where delay means that the peasants will be nipping at your heels
Bloody peasants!
Actually, I hear ya. The T3E did have some horrible hardware problems in the beginning. In the end, it was vastly more stable. We could run for a long, long time w/o problems. However, the SP3 we have has problems even now. IDK if IBM will ever get the bugs ironed out with this and related architectures...:S Just IMNSHO. ;)
Or maybe.... (Score:2)
So you'd expect the CPU in your computer to fry every 10 years or so, if you kept it that long.
The reason you have more issues with multi-processor supercomputers is that.. gasp.. you have MORE PROCESSORS.
Put 1,000 processors in a machine, and instead of 1 failure every 10 years, you get one failure every 3.65 days. And that's just CPUs.
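Spelled out (same figures as above; a trivial sanity check):
```c
/* Aggregate MTBF: n independent parts, each with MTBF m, fail in
   aggregate about every m/n. */
#include <stdio.h>

int main(void)
{
    double mtbf_years = 10.0;    /* single-CPU MTBF assumed above */
    double cpus       = 1000.0;

    double days = mtbf_years * 365.0 / cpus;
    printf("%.0f CPUs at %.0f-year MTBF -> a failure every %.2f days\n",
           cpus, mtbf_years, days);
    return 0;
}
```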
Re:Or maybe.... (Score:3, Insightful)
Re:explain (Score:5, Informative)
Second, (yes, I work for Cray so now I'm going to put in a sales pitch
Finally, there's memory. Lots of it. A single system image supercomputer can have terabytes of memory in one kernel image. You're simply not going to get that in a single PC cabinet.
And in case anyone doubts that vectors, big memory, and large bandwidth can make a good system: the fastest machine in the world right now is the Japanese "Earth Simulator", which is an NEC SX machine. It is somewhat similar in architecture to a Cray in that it has large bandwidth and vectors.
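For anyone who hasn't seen vector code, the classic kernel below is the shape a vectorizing compiler loves: long, independent memory streams. (Generic C sketch, nothing Cray-proprietary.)
```c
/* saxpy: the canonical vector-friendly loop. No loop-carried dependence,
   so a vector machine streams x and y through the pipes at memory speed. */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    float a = 2.5f;
    long i;

    for (i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    /* the vectorizable kernel: one multiply-add per element */
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}
```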
Re:explain (Score:2)
Vector processors are much faster, I know, but are they so much faster as to justify the increased pr
Re:explain (Score:5, Insightful)
Sounds like Cray marketing articles. For example, Daniel Katz at JPL wrote in 1997 [nasa.gov]:
which is > 35% of peak. Or consider this [liv.ac.uk] from the University of Liverpool: for sustained/peak of about 60%.
I have no doubt that one could find problems where a Beowulf cluster has 10% efficiency, but there really are many problems that are good to go on a cluster. And even if you only got 10%, it would be worth it if the cluster cost 5% of what a vector computer costs. Not to mention that performance/$ on commodity hardware increases by a factor of 2 every 12-24 months. It takes years to develop a supercomputer, and it is then stuck at its level of technology for several years, since supercomputers are so expensive to redesign.
Re:explain (Score:4, Insightful)
Other posters have already pointed out the bandwidth issues over and over, so I'll skip that obvious difference.
The fact is that not all problems are suitable for parallel processing. Sometimes you really need to know the outcome of one operation before you can go on to the next.
Beowulf clusters really suck on problems where that applies. Cray-style supercomputers shine on them.
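A two-loop illustration: the first loop below has a loop-carried dependence (each step needs the previous result) and is inherently serial; the second splits across nodes trivially. (Generic sketch; the logistic map is just a stand-in for "dependent work".)
```c
/* Dependent vs. independent work: only the second loop splits across
   nodes cheaply; the first is inherently serial. */
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void)
{
    long i;
    double sum = 0.0;

    /* loop-carried dependence: a[i] needs a[i-1], so you cannot hand
       chunks to different nodes without communicating at every step */
    a[0] = 0.5;
    for (i = 1; i < N; i++)
        a[i] = 3.9 * a[i - 1] * (1.0 - a[i - 1]);   /* logistic map */

    /* embarrassingly parallel by contrast: each term is independent,
       and partial sums from different nodes combine cheaply */
    for (i = 0; i < N; i++)
        sum += a[i] * a[i];

    printf("sum of squares = %f\n", sum);
    return 0;
}
```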
Re:explain (Score:2)
PCs rule. Why would anyone want to use anything else? This is so beyond my experience I have no comprehension of real computers. I can't even fathom the need for such a machine.
Re:I could use a DARPA contract... i need CASH! (Score:2)