Update From Cray World 108
rchatterjee writes "Cray, the only mainstream recognizable name in supercomputing, has been busy lately. Their totally new MTA-2 supercomputer design will use an UltraSPARC-III-powered Sun Fire 6800 server just to feed data to the MTA-2's processors. They're also refocusing on vector supercomputers and are going to release their first new vector supercomputer since Tera Computer bought them, the SV-2, in 2002. And if that wasn't enough, they have a deal with API Networks to develop Alpha-based Beowulf clusters of Linux machines that, as a cluster, will run the same operating system as Cray's T3E supercomputers. Seymour Cray would be proud. You can get a quick overview of all the latest Cray developments from this article on Cnet."
Imagine... (Score:3)
Oh wait, nevermind
Woohoo! (Score:2)
Seriously though, it is great to see that they are finally gearing up some new designs, and I cannot wait to see some of the performance specs on these.
A computer is not a computer unless it takes up at least 40 cubic feet
Doesn't hold a candle. (Score:2)
tcd004
Real contenders now (Score:2)
Proud? (Score:3)
If Cray were alive today, I like to think that he'd be directing research into quantum computers, and maybe technologies like the ones Starbridge Systems is working on.
Hold it! (Score:4)
OH YA!! (Score:1)
Re:What's it good for if your friends don't have o (Score:2)
Don't get me wrong here, Cray's putting out some remarkable new hardware, but there's no point in spending millions of dollars on a machine that will be as powerful as an average desktop in five years time.
Not only that, but the IDEA itself of supercomputing has become obsolete within the past few years. With recent advances in distributed processing and Beowulf clustering, anyone with a bunch of old 486's lying around can combine their power to process more data than a Cray could ever dream of.
SETI@home is a perfect example of MASSIVE amounts of processing being done by many small, inexpensive computers working in unison.
Google [google.com] also uses clustered computers to provide the horsepower behind their search engine. In fact, it is believed that Google operates the largest Linux cluster in the world. Many of their computers are literally junkers. They've got dozens of 286, 386, and old Sparc's working together to provide an EXTREMELY powerful search engine.
Re:What's it good for if your friends don't have o (Score:2)
This must be a troll....
Now if only... (Score:1)
...I could get the rights to www.cray.com, then I would be happy.
A good reason not to have a large powerful computer company name in your alias. Good luck getting a domain.
Cray
well (Score:1)
Re:What's it good for if your friends don't have o (Score:2)
We? What are you talking about? Who here has even had any input into the purchasing of a super-computer? If it wasn't for your relatively tame posting history, I'd say you were a troll...
People buy a super-computer for one purpose - raw computing power. Not optimized for interconnectivity, not to conform to standards, not to run Linux or other system of choice, not to play Quake (although I'd like to see the benchmarks).
You buy it not because it's a good deal, but because you have research bucks to burn, and want the best money can buy. If you are even asking about cost or a down-the-road upgrade to a different platform, you are probably not looking for a super-computer.
It's like saying, maybe the Air Force should give up on specially designed fighter planes, and see if it would be a better idea to convert 747s or an Airbus model. Imagine the cost savings in spare parts!
Re:What's it good for if your friends don't have o (Score:1)
battle begins (Score:2)
nude chix [antioffline.com]
Re:Proud? (Score:3)
I agree with you that Cray was not only about pushing the limits of technology--he was working on the Gallium Arsenide Cray-5 at the time of his death--but also about innovation in computer architecture.
A great example is the CDC 6600, his first parallel computer for Control Data Corporation. It had many innovations that only later came into popular use. It was a parallel processor, essentially a pipelined machine. It had a pure register load/store architecture, with a hardwired zero register, similar to many future RISC designs. There are many more, but I gotta run....
Re:What's it good for if your friends don't have o (Score:2)
A good one, though - only a couple of red flags - he could have left out the Microsoft stuff and still got me.
It might even be the dreaded double-irony troll - what is the point of a bunch of Slashdotters commenting on developments in Cray super-computers? Inside knowledge? "Maybe it will run Linux?" "They ran Win2000, and now it is as fast as a 386! LOL!"
As I said, truly an effective troll - no one has much to comment on, but he throws in a catalyst to make people comment, anyway.
Re:What's it good for if your friends don't have o (Score:2)
not to play Quake
Taken in conjunction with this comment [slashdot.org] entitles you to some elective surgery.
You've still got one, though.
--
Actually Seymour would be disappointed... (Score:4)
The original Crays (and CDC-6x00 machines) were crafted to take advantage of the efficiencies of the speed of light in their busses and memory configurations (i.e., cable lengths cut to sub-millimeter tolerances). These new "crays" are crays in name only... they lack the creative zeal that made "real" crays the exciting machines that they were and that launched the supercomputing industry...
These are just glorified SMP machines...
Don't get me wrong, clusters are great (hey, I even wrote a book on them!), the CPU speeds we have now are beyond my wildest dreams, but there will never be another Cray...
Imagine... (Score:2)
Bzzt! Cray is actually quite busy these days (Score:3)
There are _many_ things going on behind the scenes at Cray that show that Cray is once again trying to push the supercomputing envelope as far as they can. One way to look at the SV2 is as a T3E with large vector units in each CPU (no e-registers) and a nearly flat (shared) memory space across all processors. Thus, no need for mixed mode (MPI and OpenMP) programming like on IBM SP-like architectures.
Re:What's it good for if your friends don't have o (Score:2)
The distributed/clustered model is great for some problems - lots of completely independent data processing (like SETI, or Google) works great. But once you move into the realm of scientific simulations where all the calculations are interdependent, a Beowulf cluster (even with a good interconnect) can't hold a scalability candle to, oh, say, a Cray T3E. There's a good reason that the Cray T3E and SV1 won the "Co-Supercomputer product of the year award" this year, as handed out by the people who use them.
Re:fp (Score:1)
Re:well (Score:2)
Re:What's it good for if your friends don't have o (Score:1)
not to play Quake
Taken in conjunction with this comment [slashdot.org] entitles you to some elective surgery.
Funny. Someone else beat me to the first Quake post, but I may be the first non-anonymous guy. And I lost one just last week for an AYBABTU reference...
Now, where's the AYCABTU reference?
Re:What's it good for if your friends don't have o (Score:1)
Re:Imagine... (Score:1)
Maybe if you posted less as an A/C you'd get moderator access someday and you could do something about it; until then, enjoy the Ozzy and shut your mouth.
Makes me nostalgic (Score:1)
Ah, yes, the knitting guild. You young whippersnapper wouldn't remember back in the day, when every knitting guild worth its salt had at least a mainframe, and the big 'uns had a supercomputer or two.
Makes me feel young just thinking about it.
Nowadays, uh-course, the knitting guilds are all run by youngsters, with their newfangled WinNT clusters and WinMe clients.
Man, we gotta get a rating of "-1 WTF?".
Quick overviews (Score:2)
Exactly how many overviews per second can I get?
Re:Actually Seymour would be disappointed... (Score:3)
So, yeah, Cray is branching into some lower markets, but they've finally escaped the creative dearth (at least on the high-end) that is SGI.
Questions about this Beowulf thing... (Score:1)
If I get all this right, the T3E operating system enables a programmer to see the cluster somehow more as a single computer (with less (no?) load balancing to do in the program itself and such). If I do get this right, and it's really done to a certain point, this would be really fascinating.
Does anyone know more about, for example, how the T3E works (preferably in simple terms)?
Re:Imagine... (Score:1)
I'm imagining, but, alas, it doesn't take as much imagination as Beowulf clusters of everything else which has been posted on /. Somehow the magic seems to have gone out of this one...
--
Re:Hold it! (Score:1)
Re:How are you gentlemen !! (Score:1)
Re:Where do their priorities lie? (Score:1)
It would be suicide to simply say to a customer "Hey, I realize that you may have written some business apps (more than a few mission critical I'm sure) for those AIX and OS/390 boxen, but if you haven't heard we're all gung ho on Linux now. We've stopped supporting / maintaining / updating all of those other OS's because they're not '733T enough. Port all of your software to Linux, you 'lusers.'"
If you don't have anything nice to say, say it often.
Re:Proud? (Score:1)
If Cray were alive today he'd be clawing the inside of his coffin.
Re:Questions about this Beowulf thing... (Score:1)
Sh*t. Again typing too fast. Before anybody complains... I wanted to say: the T3E operating system enables a programmer to see the Beowulf cluster somehow more as a single computer
And even then I don't know if this is correct English...
ILM (Score:2)
I can see the programmers and designers drooling now.
Check out the Vinny the Vampire [clik.to] comic strip
another battle... begins (Score:2)
Either way you cut the cake, Sun still got in. What would you rather have had, AMD or Intel making chips for a Cray? Give me a break. They could have gotten a better deal on chips from SGI, who sold them Cray from the get-go and that's a given, so what makes you think they didn't do specs on it as well?
"See? UltraIIIs are fast! You can stop making fun of how slow our chips are now! UltraIIIs are fast enough to be associated with (lots of hand waving) CRAY'S NEW SUPERCOMPUTER!" -- Sun marketing.
Stick to whatever your day job is. When it comes to purchasing equipment (which I have done many times) I would go for what I felt was best, based on benchmarks, industry usage, etc., not some marketing bullshit, so give it a rest.
You sound upset that Sun has captured this segment of business from Cray; I'm happy for them as I am for most companies that do well. As for Cray, it's a bit outrageous unless you're a government or a major Fortune 500, but I do use, own, and plan on purchasing more Sun hardware, including a SunBlade workstation for home use, regardless of what people think. Sure they're expensive, but well worth it, and Alphas just couldn't cut it all the time, so what other options are there? Wait... Maybe I should sit back and wait for you to make a chip, how's that?
Re:SETI (Score:1)
Re:What's it good for if your friends don't have o (Score:2)
There is a big difference between a vector supercomputer and some random collection of microcomputers. It's bandwidth and the ability to efficiently handle large datasets. See the Stream Benchmark [virginia.edu].
Re:Hold it! (Score:2)
Actually, I wouldn't be too surprised if an Athlon or PIII could beat this machine at Quake frame rates (that is, if you could install a video card on it...). This kind of computer is very, very fast at vector computations, but very slow for scalar... plus, despite the HUGE memory bandwidth, the latency is relatively poor.
work with linux on a Cray! (Score:1)
Re:What's it good for if your friends don't have o (Score:4)
Management: Which would you rather manage - 1024 separate PCs, each with their own boot disk, hostname, power supply, etc., or a Cray T3E with a single system image and one boot disk? Think about the time it would take to do an OS upgrade on the cluster.
Bandwidth: Stuff like Myrinet and Quadrics is quite good but it still doesn't come near the bandwidth that you can get on a traditional supercomputer. Google and SETI@home are *horrible* examples of real scientific code because they do almost no internode communication. We can get 1.6 gigabytes/second full duplex between nodes on our Origin 3000 product. The T3E gets even more than that.
Latency: The time it takes to get from node A to node B matters *a lot* with real code. Again, SETI and Google don't care if it takes 100 microseconds instead of 4 to exchange data. When you are exchanging lots of data and synchronizing with many other nodes, this matters. Many massively parallel jobs spend large percentages (like 25%) of their time doing communication. A lot of this is very small messages.
Quality: Usually, you get better components when you buy a supercomputer than a PC. Does this matter for you? Probably not. If you are trying to predict where a tornado is going to touch down, you're going to be a lot more interested in whether the machine is running.
Ease of coding: It is a lot easier to use a model of coding called OpenMP, which relies heavily on shared memory between threads, than MPI, in which you have to explicitly call for communication between threads to happen. OpenMP runs best on large SSI supercomputers.
Now don't get me wrong - there are many applications for which a cluster is sufficient. This doesn't mean there is no room for supercomputers. Besides, if you look at the direction Quadrics, Myrinet, and the new InfiniBand stuff is going, they are going to end up looking a lot like a shared memory supercomputer....
Re:battle begins (Score:2)
Re:Imagine... (Score:2)
Re:Bzzt! Cray is actually quite busy these days (Score:2)
Re:battle begins (Score:2)
Re:work with linux on a Cray! (Score:2)
Re:What's it good for if your friends don't have o (Score:1)
Re:What's it good for if your friends don't have o (Score:2)
Re:What's it good for if your friends don't have o (Score:3)
Also, would it shock you to let you know that Cray machines have TCP/IP stacks, and ethernet ports, and all that? They don't have video cards, so you have to connect to them somehow...
Re:Questions about this Beowulf thing... (Score:2)
Re:How are you gentlemen !! (Score:1)
Anyway, a professor in grad school seven years ago told me Cray had all but died, cancelling their latest processor, because the fastest RAM they had back then would still cause their processor to idle for 350 cycles just to load one value.
Intelligent compilers at best can only use that wasted time performing maybe 10 instructions ahead, not the 350+ required of a superscalar vector computer. Hence, the processor was basically useless, starved of information. Project cancelled. Cray on Skid Row.
I presume they have licked this RAM-to-processor problem?
But what is the market? (Score:2)
On the supercomputer front the game has changed radically since the original Crays. The Cray 1 was a SIMD machine, one instruction stream controlling multiple processors. That architecture works well for a limited number of problems; the trouble is that most programs turn out to have bits of vector code interspersed with decision code. If 10% of your code cannot be parallelized, then even an infinite number of processors can only go 10 times faster than a single processor.
The attraction of vector boxes was that there was no need to recode the FORTRAN application, the compiler could detect the parts of the code that could be parallelized and optimize the code. The problem is that there are limits to what the automatic parallelization can do.
The upshot is that there tends to be little advantage in more than 8 or 16 processors in a vector box. Meanwhile a standard Pentium IV has multiple independent processing pipelines - I forget the number (4 maybe). So the gap between the Cray box and the mainstream may not be amazing.
At this stage most of the problems in science can be attacked using MIMD architectures. These range from the SETI style very loose coupling over the internet to closer coupling such as the SMP machines.
The actual speed of the cluster is pretty much irrelevant; I can build a SETI-style parallel computer using off the shelf hardware for less than $1000 per processor. But that only allows me to handle problems that can be broken down into lots of independent sub-problems: trivial parallelism.
What Cray appears to be doing is building a machine that has closer coupling between the processors. There are certainly problems for which this approach is the solution. I doubt that the number of such problems is commercially viable, however. The problem is that many of the traditional supercomputer problems are now dealt with using loosely coupled clusters. 100Mbps Ethernet is probably adequate for many problems. Other traditional 'supercomputer' problems are now attacked with desktop servers. I remember doing work with astrophysicists who used to wait for time on expensive mainframes; these days a cheap Linux box meets their needs.
Even 'defense' (read corporate welfare dept) applications are no longer automatically supercomputer class. Sure they may do a lot of processing, but these days it is likely to be optimization type work, which in turn tends to break down into a series of independent simulation runs.
Re:What's it good for if your friends don't have o (Score:1)
I have, I have built them, bought them, designed them.
In many cases you are right, supercomputers are bought not on the technical merits but on the 'cool to have' corporate ego trip budget.
Twelve months ago there would no doubt have been a queue of dot coms eager to find something slightly less mindless than a superbowl ad to spend money on, today I think not.
Supercomputers are like Formula one racing cars, expensive to buy, expensive to run and can only be used to advantage by a very small number of people.
These days I would guess that 95% of the people who can use a supercomputer to advantage well enough to justify the sticker price are working in much more profitable and stock option rich fields.
Re:Imagine... (Score:1)
"Can you imagine what Taco Bell would have given out for free if one of THESE had landed on its target?"
Help me out here, peeps.
What about the "Cray SX-5 Series"? :-) (Score:5)
Okay, I'm a NEC/HNSX Supercomputers employee, on the verge of becoming a Cray employee (because of the agreement they signed), but I'm not speaking for anyone else but me here, of course. :-)
I don't know why people bother with such news. Sun's gonna provide the I/O processor for a not-so-high-end supercomputer. And?
A few weeks ago, there was a real bombshell: Cray would drop the anti-dumping legal action, re-opening the US market to Japanese supercomputers. Cray will even become the sole reseller of the NEC SX Series in North America!
If you go take a look at www.cray.com [cray.com], you'll see that this agreement with Sun occupies a single line in their news listing, while the NEC agreement is a big framed box that occupies about half of my screen here.
For some time now, American supercomputer customers have been petitioning to get Japanese machines, because it had been a long time since the American machines had been up to any good. Instead, we hear about the SV2, which will barely surpass the few-years-old SX-5's processing power, with less memory throughput than the SX-5.
I won't deal with the "no need for big clunky vector supercomputers, we have clusters" line. I believe a whole lot in clusters, but they're freakin' hard to program, and some things just won't be as fast (hey, the SX-5 CPU has a 256-byte-wide memory path! that's not bits, that's bytes! what can you do with your puny gigabit ethernet cluster interconnections?).
Look at these bandwidth benchmark scores [virginia.edu]. The closest thing to a cluster, the Origin machines, are literally crushed to bits by the SX-5. And they're doing twice as well as the SV1.
As for using old big iron machines for stuff like fridges and so on, there was a cool thing at one of our customer sites, at the University of Stuttgart: a Cray coffee table. :-)
Nothing beats talking about supercomputer technology while drinking some orange juice on top of a Cray machine. NOTHING.
--
Huh? Linux machines? (Score:1)
Re:What's it good for if your friends don't have o (Score:2)
Re:Where do their priorities lie? (Score:1)
Re:battle begins (Score:2)
Re:another battle... begins (Score:2)
Vaporware? Or oversoldware? I would rather oversell stock than have shit lying around. As for EV6's, why in the world would I want a shitty Compaq if I were going to buy a supercomputer? Shit, for that I'd save for ASCI White Jr. or something; I still wouldn't go with DEC/Compaq. Ahhh......... IBM SP Power3 Winterhawk II, my kind of computing power
Re:Actually Seymour would be disappointed... (Score:1)
My point was exactly that these new machines are not vector supercomputers; they are basically just SMP clusters... because they are built from off-the-shelf parts like SPARC processors, they can't be supercomputers in the style which Seymour Cray envisioned.
They're fast machines taking advantages of all sorts of innovations in parallelization and clustering, but "Crays" in the classic sense of a machine developed as a best of breed, highly engineered supercomputer they ain't.
Re:What about the "Cray SX-5 Series"? :-) (Score:2)
As to your coffee table... I love it! A guy I know has an old SuperServer that he converted into an end table. Oh, and in one of the old Cray buildings in WI, they use an old Cray X-MP as a waiting room couch. That's a nice touch.
Re:Actually Seymour would be disappointed... (Score:1)
Remember back 5+ years ago, there was a Cray Research (that built the T3E) and a Cray Computer (that was still building vector machines - the last of which, the Cray-4, was never bought by anyone)
If you want to see where Seymour's influence still reigns - check out SRC Computers [srccomp.com]. Hint: SRC doesn't stand for "source" - it's Seymour's initials.
Re:Actually Seymour would be disappointed... (Score:2)
The Last thing CA needs (Score:1)
Good news: using the new supercomputer we have found a cure for cancer. Bad news: 3 nuclear power plants overloaded, spilling tons of radioactive dust.
Weight lifting the biggest waste of Iron since Big Iron
Re:Actually Seymour would be disappointed... (Score:1)
And yes, this (the one in the article) is the same Cray that built all the vector machines. Check out the gallery [cray.com] if you don't believe me.
Re:Doesn't hold a candle. (Score:2)
AMD also has the Duron, which competes with the Celeron (I don't think it is doing well in the market, I think due to motherboard prices). I think as far as x86 compatibles go, the only thing AMD is missing is a multi-CPU extra-costly CPU like the Xeon. As far as I know the normal AMD can run in a very large machine (same bus protocol as the EV6, which runs in 40+ CPU machines); it is lacking motherboard chipsets that do it. The Xeon still has more cache. Oh, I almost forgot about notebook CPUs: AMD doesn't yet have a great one on the market, only announced.
The other Intel CPUs (i960, XScale, and so on) aren't what everyone is talking about, and they don't make nearly as much money.
As for x86s, it looks like the top o' the heap is really, really close again. I think Intel is going to have the lead again for a while, but they may lose it when the next round of AMDs are out (not the next shrink, but the next real design), or they may not. I may change my mind on that over the next few months, but if I had to bet today, I would have to bet on Intel. Of course I like AMD more, but...
So go hang out in comp.arch.
Does that really matter in a desktop? My AMD box puts out less heat than my Sun 4/110 (not that I power the 4/110 up much). The only thing I care about in a desktop (other than how fast it is) is fan noise. My Intel and AMD boxes both used to make the same amount of fan noise, but the Intel blew its power supply, and the one I picked up locally is really quite noisy.
In laptops Intel has a fast CPU, and AMD just doesn't. However, Intel's puts out a crapload of heat, so it has an insanely loud fan (in my Vaio). For laptops I'm kinda partial to Apple's G3 and G4. They even ship a Unix that isn't too sucky :-)
'Fraid not (Score:2)
"Hi, my name's Stuart, and I'm a s-supercomputer user.
"I first started using them quite recently, just six months ago. I got offered a small amount of computer time, for free. It's always the way of these things, let you use these pointless things, get you hooked.
"Anyway, I thought I might just try it. You know, the first time can't hurt. Besides, I thought I w-would be able to give it up, any time I wanted.
"All I wanted to do was to get the program to run that little bit faster. It was calculating the ground state energy of an ordered perovskite. A big job, to be sure - it took nearly 200 MB to hold the wavefunction.
"I packaged up all the code I'd got to that date. Burnt it off to a CD, just in case, you know.
"The supercomputer sprinted through the code in thirty minutes. I just wasn't prepared for that. It was a feeling I hadn't felt before. Normally, on the RS/6000's we have ourselves, it took a few days, so I was literally gobsmacked.
"Well, one thing led to another, and we were offered money if we could calculate a disordered perovskite, with lithium interstitials. I didn't even think of using our own computers; my thoughts turned straight to the supercomputer.
"On reflection, I can see now that I wasn't thinking clearly. After all, just because it took nearly a gigabyte of disk space to hold the wave function, there was no reason why an ordinary computer couldn't have done it. With some swapping, as it needs to hold three of them in memory at once.
"And, of course, everything the code did was a vector operation. That confused me, because I should have seen that a scalar processor was more efficient at doing vector calculations than a processor designed to do them, but that's the supercomputer addiction kicking in.
"It took a few days on the supercomputer. That's when I realised what had happened, and took my chance to return to normality.
"Fortunately, with support, we managed to leave the supercomputer behind, and get a 24 node Beowulf of Pentium-III's.
"Just to show how unnecessary these so-called 'super' computers are, the Beowulf is now running the code. We're a little concerned that they haven't produced any results after three days, due to constant swapping, but that's just after-effects of the supercomputer. After all, the natural state of a computer is disk thrashing.
"The way the processors now take thirty times longer to do a single vector operation is a lot of comfort to me. I can see the light now."
--
bit of nostalgia (Score:1)
--
Smithsonian (Score:3)
Its processing speeds, of around 150 million floating point operations per second, were far above anything else at the time of its announcement in 1976. Those speeds are now matched by inexpensive workstations that fit on a person's desk.
Re:bit of nostalgia (Score:1)
Re:Where do their priorities lie? (Score:1)
But these companies don't want to abandon their existing customers either, so they are stuck maintaining their proprietary systems. Some companies like IBM are contributing large quantities of their proprietary material to open source in order to further cut R&D costs, but others like Cray don't see the value in this.
Don't think for a moment that these companies are committed to Linux as a concept. They are committed to the almighty dollar. However, Linux is a means to this end and it will serve them well.
Re:Where do their priorities lie? (Score:1)
A boon for altivec? (Score:3)
Re:What's it good for if your friends don't have o (Score:2)
As for the scheduling, at least on Irix, and I assume Unicos/mk, the scheduling/memory management is good enough that you can run multiple jobs. The way it works on Irix is that you can dedicate a certain set of CPUs and memory to a job and other sets to other jobs. That gives the job dedicated access to only the amount of hardware that it "needs" (or the programmer thinks it needs, anyway). I know similar stuff exists for the Cray T3E, but I'm not familiar with the details.
Re:What about the "Cray SX-5 Series"? :-) (Score:1)
Sounds like protectionism-cum-corporate-welfare to me. Cray corp fails commercially, then gets to blackmail their competitors into cutting them a distribution agreement to let them back into the protected market.
Like Cray wasn't subsidised itself. The NSA and Los Alamos bankrolled them from the Cray-1 through at least the Y-MP.
Re:Huh? Linux machines? (Score:1)
Re:Smithsonian (Score:1)
Turned out that he had one already.
Re:Questions about this Beowulf thing... (Score:2)
And yeah, I forgot to include the MPI/shmem library and compiler in my list of what they would probably port.
As for Unicos/mk, the real difference there was the ability to have no difference at the user level between it and other Unices. It is able to present 1800 different kernels to the user as if a single OS were running. No small feat in OS development :)
Re:Actually Seymour would be disappointed... (Score:1)
The distinctive aspect of Cray's work was not the style of parallelism; it was the choice of gate technology. Seymour liked ECL and gallium arsenide.
If he were around today he would be building MIMD machines. There are structural limits to SIMD that mean the maximum number of vector processors you can keep running is about 16, and there are limits to backplane technology that mean the number of nodes in a shared backplane that can be kept busy tends to top out at about 16.
What would be different about Seymour's MIMD is that he would dunk it in a vat of cryogenic coolant, allowing the clock speed to be boosted by about three to four times.
He was trying similar tricks with his GaAs Cray-3, which the company that bears his name was too scared to build. He wouldn't be messing around with GaAs now, however; silicon processing gives better speed these days.
Re:What's it good for if your friends don't have o (Score:1)
Yes, a market of hundreds of machines being fought over by five-plus large companies.
People still buy supercomputers even when they are obsolete. When DEC released the Alpha they gave me one for evaluation. It outperformed the main IBM mainframe on the site by about 30% and cost less than 1% of the IBM's annual maintenance.
There are a few applications for which a supercomputer is the answer. However my experience is that they tend to be a political purchase rather than a technically necessary one.
When I was at CERN the experimental physicists demanded time on the Cray since, as CERN is an experimental lab, they should get the best tools. GEANT, the piece of absolute crap they used at the time to simulate experiments, runs no faster on a vector machine; the problem simply does not work well on the architecture.
Now there were plenty of theoretical physicists who had code that would work well. So guess what the CERN management did? They allowed a five man team to spend four years rewriting GEANT for the Cray. The project would have taken longer, but the machine was decommissioned first.
The World Wide Web was invented largely to circumvent the idiotic dictates of the incompetent CERN Network division management, although Tim will never admit that in public (nor will I for that matter without a nym :-)
Re:What's it good for if your friends don't have o (Score:2)
Also, I could be reading you wrong, but you seem to be implying that supercomputer == vector supercomputer. I am including NUMA style machines like the Cray T3E and large SGI Origin machines as supercomputers. Since your example app wasn't vector, I'll assume it was something along the lines of MPI, which would run quite well on those architectures.
Re:Questions about this Beowulf thing... (Score:2)
Also, I don't know how many large T3E customers you've talked to, but I've talked to several (7 or 8) and almost all of them cited the single image as one of the reasons they love the machine, specifically as opposed to the IBM SP2. Besides, even if what you say is true, I suspect that the chunks are larger than 4 PE's per partition, which is what you'll get (effectively) in a cluster.
Finally, while you may not miss being able to do a "ps" and see all the processes on the machine, you will probably notice the longer latency and lower bandwidth, and miss the shared memory, in a cluster, well designed or not....
Re:What about the "Cray SX-5 Series"? :-) (Score:1)
Re:A boon for altivec? (Score:1)
You should consider Itanium, which will do up to 8 DP instructions/clock cycle.
At 800 MHz, it will peak at 6.4 GFlops DP.
This is one of the reasons Intel bought Kuck & Associates, to provide vectorization capability to their own compilers.
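The peak-rate figure above is just ops-per-cycle times clock rate. A quick sketch of the arithmetic (the 8-ops/cycle number is the parent poster's claim, not an independently verified spec):

```python
# Peak throughput = FP ops per cycle x clock rate.
# The 8 DP ops/cycle figure for Itanium is the poster's claim.
def peak_gflops(ops_per_cycle, clock_mhz):
    # clock in MHz, so divide by 1000 to get GFlops
    return ops_per_cycle * clock_mhz / 1000.0

print(peak_gflops(8, 800))  # 6.4 GFlops, matching the post
```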
Alive? Is he dead? (Score:2)
Is there a history of Seymour Cray somewhere?
Rockets and elephants (Score:2)
Re:Alive? Is he dead? (Score:2)
What about IBM? Another class of supercomputers? (Score:1)
IBM has at least half the entries here. And the all-important #1. Cray shows up at #10... after 5 IBM installations.
What's up? Is someone claiming Cray belongs to some special set of supercomputing? Would they care to elaborate? Or am I just being overzealous....
Re:Rockets and elephants (Score:1)
for I/O and not calculations, the MTA processor is what's interesting.
Lucky you _had_ two... (Score:1)
KSR: The only useful supercomputer (Score:2)
They had some incredibly cool features, like the fact that the system ran a UNIX variant (OSF/1), not on a front-end like the Crays, but right on the box itself.
They also had a wild memory architecture where there was no main memory. Instead, each processor had a 32MB cache, and the system virtualized the caches into a giant virtual memory space (giant for the time; now 2GB of main memory is average to low for serious computing).
Process migration and the scheduler were a thing of beauty. When a process needed to be moved to a new processor, only the stack pointer and registers needed to be moved. When the process was ready to run again, a simple-seeming page fault would take care of everything else, moving its stack and any other memory pages that it needed locally.
They even solved the performance problems of a ring-based bus, and got better performance than most flat busses. One of these suckers with 1024 nodes was a marvel to behold, and alas, there will never be another.
Re:What's it good for if your friends don't have o (Score:1)
It's been a while since I followed that market closely. My main point was that Seymour would not be building vector boxes; that architecture has passed its sell-by date.
The reservation I have about NUMA is that the entire hardware design is built around support for code written for single processors in languages that have a lot of baggage.
The number of times I have seen a physicist demand supercomputer time for code running a bubble sort... I have frequently been able to tune code to run faster on a PC than the physicists could get it to run on a Cray.
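To put a number on the bubble-sort complaint: an O(n^2) sort loses to a decent library sort on any hardware, vector or not, so no amount of iron rescues the algorithm. A throwaway comparison (sizes and timings are illustrative only, obviously not from a Cray):

```python
import random
import time

def bubble_sort(a):
    # Classic O(n^2) bubble sort -- the kind of code that no
    # supercomputer will rescue.
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.random() for _ in range(2000)]

t0 = time.time(); bubble_sort(data); t1 = time.time()
t2 = time.time(); sorted(data); t3 = time.time()
print("bubble: %.3fs  builtin: %.6fs" % (t1 - t0, t3 - t2))
```

The point being that tuning the algorithm buys orders of magnitude; buying time on fancy iron buys a constant factor.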
Point is that the whole approach of writing analysis code in FORTRAN and running it on parallel boxes is obsolete. What is needed is an operating system and language for manipulating mathematics directly. Something that combines the spreadsheet with Mathematica in a way that moves beyond the constraints of Visicalc and clones.
The imperative languages the physicists use make recovery of parallelism a very hard job. A declarative approach to defining the problem would save a lot of time, avoid a lot of errors, and parallelize better.
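The imperative-vs-declarative point in miniature: an explicit loop spells out order and intermediate state, so a compiler must prove independence before it can parallelize, while a whole-array expression is independent by construction. A minimal sketch of the same computation both ways (plain Python, standing in for Fortran vs a declarative array language):

```python
# Imperative style: element order and mutable state are spelled out,
# so a compiler must prove the iterations independent before
# parallelizing or vectorizing.
def saxpy_imperative(a, x, y):
    out = []
    for i in range(len(x)):
        out.append(a * x[i] + y[i])
    return out

# Declarative style: the whole-array expression states only *what* is
# computed; every element is independent by construction, so it can
# be handed straight to a vector unit or a pool of processors.
def saxpy_declarative(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]
```

The declarative form could be dropped into something like multiprocessing.Pool.map unchanged; the imperative one needs analysis first.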
As Tony Hoare said "Physicists used to repeat each other's experiments, now they share each other's code".
My conclusion is that the physicists don't deserve fancy iron, they don't care to learn how to use it, they are simply using it as an alternative to thought and an ego boost.
Re:What's it good for if your friends don't have o (Score:2)
ASCI Red was on top because Intel threw so many processors at it. LINPACK is not really all that representative of customer code. If you can tell me (and have data to back it up) that Sandia's code ran as well on ASCI Red as it would on ASCI Blue Mountain or a Cray T3E, I would be very surprised.
Re:What's it good for if your friends don't have o (Score:2)
As for the physics stuff being obsolete, I'd say that it's not. Now this isn't because I don't think there's something better we could be doing, but because there isn't yet....
Re:What's it good for if your friends don't have o (Score:2)
Re:What's it good for if your friends don't have o (Score:2)
Also, you keep trotting out examples of things that are in the category of "embarrassingly parallel". Rendering video and chip design are two examples of that. Weather forecasting, oil exploration, particle physics (read: nuclear bomb simulation), and protein folding (among many other things) are *not*. They require communication. This is why Pixar doesn't have a supercomputer and why Los Alamos National Labs does. If you don't understand why latency *dramatically* affects the speed of an MPI program, go read up on the subject and get back to me.
Finally, you don't seem to have much knowledge of networking. You don't just "add 100x the processors" and expect that the network is going to scale. That takes careful planning and hardware that you don't buy at CompUSA. I think you will find that building a really good cluster, while perhaps cheaper than a supercomputer, is a lot closer to that price range than you would expect.
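A back-of-envelope model makes the latency point concrete: if each timestep of a communicating code does some local compute plus a fixed number of message exchanges, per-message latency dominates once it is commodity-Ethernet sized. The numbers below are illustrative assumptions, not benchmarks of any real machine:

```python
def step_time(compute_s, messages, latency_s):
    # One timestep of a message-passing code: local compute plus
    # 'messages' exchanges that each pay the network latency.
    return compute_s + messages * latency_s

compute = 0.010   # assume 10 ms of local work per step
msgs = 50         # assume 50 halo exchanges etc. per step

for name, lat in [("tight interconnect", 5e-6),
                  ("commodity Ethernet", 500e-6)]:
    print("%s: %.4f s/step" % (name, step_time(compute, msgs, lat)))
```

With these made-up but plausible figures the Ethernet cluster spends more time in the network than in the arithmetic, which is the whole argument for a real interconnect.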
As for why Intel got out of the "supercomputer" business, I suspect they got out because they had no product for which there was a compelling reason to buy one. The Paragon was, for all practical purposes, a cluster (and from what I hear, not all that fantastic of one, but I may have a biased view on that). Plenty of people sell clusters.
Re:Alive? Is he dead? (Score:2)
Oh well...are there any other mavericks out there in supercomputer design?