Hardware

Update From Cray World

rchatterjee writes "Cray, the only mainstream recognizable name in supercomputing, has been busy lately. Their totally new MTA-2 supercomputer design will use an UltraSPARC-III powered Sun Fire 6800 server just to feed data to the MTA-2's processor. They're also refocusing on vector supercomputers and are going to release the SV-2, their first new vector supercomputer since Tera Computer bought them, in 2002. And if that wasn't enough, they have a deal with API Networks to develop Alpha processor based Beowulf clusters of Linux machines that as a cluster will run the same operating system as Cray's T3E supercomputers. Seymour Cray would be proud. You can get a quick overview of all the latest Cray developments from this article on CNET."
  • by MattGWU ( 86623 ) on Thursday March 29, 2001 @01:33PM (#329195)
    ....A Beowulf Cluster Of Those.

    Oh wait, nevermind
  • When and where can I get me one of these?

    Seriously though, it is great to see that they are finally gearing up some new designs, and I cannot wait to see some of the performance specs on these.

    A computer is not a computer unless it takes up at least 40 cubic feet

  • To the Pentium 4. [lostbrain.com]

    tcd004

  • The Cray team finally take their RC5 attempt seriously...

  • by JoeyLemur ( 10451 ) on Thursday March 29, 2001 @01:40PM (#329199) Homepage
    I'd hardly say that Seymour Cray would be proud. My feeling is that Mr. Cray was about innovations and pushing the limits of what computers can do. Today's Cray is doing little more, in my view, than simply rehashing the same old technologies with higher clock speeds and more CPUs thrown at the problem.

    If Cray were alive today, I like to think that he'd be directing research into quantum computers, and maybe technologies like those Starbridge Systems is working on.
  • by zpengo ( 99887 ) on Thursday March 29, 2001 @01:42PM (#329200) Homepage
    The first person to mention Quake loses a testicle.
  • All I want to know is: does it run the Counter-Strike server...
  • With recent advances in clustering technology, ALL supercomputers have become obsolete.

    Don't get me wrong here, Cray's putting out some remarkable new hardware, but there's no point in spending millions of dollars on a machine that will be as powerful as an average desktop in five years time.

    Not only that, but the IDEA itself of supercomputing has become obsolete within the past few years. With recent advances in distributed processing and Beowulf clustering, anyone with a bunch of old 486s lying around can combine their power to process more data than a Cray could ever dream of.

    SETI@home is a perfect example of MASSIVE amounts of processing being done by many small, inexpensive computers working in unison.

    Google [google.com] also uses clustered computers to provide the horsepower behind their search engine. In fact, it is believed that Google operates the largest Linux cluster in the world. Many of their computers are literally junkers. They've got dozens of 286s, 386s, and old SPARCs working together to provide an EXTREMELY powerful search engine.

  • But whom can you connect your supercomputer to? Chances are, it's the only one of its kind on the block. You don't see the local knitting guild or mall-walkers' club investing in a supercomputer. You see them investing in a robust NT-server/WinME-client setup.

    This must be a troll....


  • ...I could get the rights to www.cray.com, then I would be happy.

    A good reason not to have a large powerful computer company name in your alias. Good luck getting a domain. :)

    Cray
  • I know a certain federal agency (NSA) which will be putting these Cray computers on its wishlist!
  • We should rethink how we're choosing supercomputers, imho.

    We? What are you talking about? Who here has even had any input into the purchasing of a super-computer? If it wasn't for your relatively tame posting history, I'd say you were a troll...

    People buy a super-computer for one purpose - raw computing power. Not optimized for interconnectivity, not to conform to standards, not to run Linux or other system of choice, not to play Quake (although I'd like to see the benchmarks).

    You buy it not because it's a good deal, but because you have research bucks to burn, and want the best money can buy. If you are even asking about cost or a down-the-road upgrade to a different platform, you are probably not looking for a super-computer.

    It's like saying, maybe the Air Force should give up on specially designed fighter planes, and see if it would be a better idea to convert 747s or an Airbus model. Imagine the cost savings in spare parts!

  • Uhh... yes... it's a troll, I think... Ceres
  • Supercomputer stalwart Cray announced a deal to use new Sun Microsystems servers in a design the company will begin selling later this year.
    Wow, this is actually even better news for Sun, in my opinion, than it is for Cray. Cray's decision to go with UltraIIIs means that Sun apparently has the firepower to deliver (this I already knew), which could mean a showdown for Sun's newest lines of servers, which are slated to give HP and others a run for their money [theregister.co.uk]

    nude chix [antioffline.com]
  • by VSarkiss ( 173815 ) on Thursday March 29, 2001 @01:55PM (#329209)
    Actually, Cray was working on quantum computers and biological computers way back in 1996. Read this fascinating interview [si.edu] from that year.

    I agree with you that Cray was not only about pushing the limits of technology--he was working on the Gallium Arsenide Cray-5 at the time of his death--but also about innovation in computer architecture.

    A great example is the CDC 6600, his first parallel computer for Control Data Corporation. It had many innovations that only later came into popular use. It had multiple functional units operating in parallel and was essentially a pipelined machine. It had a pure register load/store architecture, with a hardwired zero register, similar to many later RISC designs. There are many more, but I gotta run....

  • This must be a troll....

    A good one, though - only a couple of red flags - he could have left out the Microsoft stuff and still got me.

    It might even be the dreaded double-irony troll - what is the point of a bunch of Slashdotters commenting on developments in Cray super-computers? Inside knowledge? "Maybe it will run Linux?" "They ran Win2000, and now it is as fast as a 386! LOL!"

    As I said, truly an effective troll - no one has much to comment on, but he throws in a catalyst to make people comment, anyway.

  • I believe that this quote:

    not to play Quake

    Taken in conjunction with this comment [slashdot.org] entitles you to some elective surgery.

    You've still got one, though.

    --
  • by dhms ( 3552 ) on Thursday March 29, 2001 @02:00PM (#329212)
    Cray's whole creative enterprise centered around making a machine that was hand-tooled to be the fastest machine that one could build with the best engineering one could invent.

    The original Crays (and CDC-6x00 machines) were crafted to take advantage of the efficiencies of the speed of light in their busses and memory configurations (i.e., cable lengths cut to sub-millimeter tolerances). These new "crays" are crays in name only... they lack the creative zeal that made "real" crays the exciting machines that they were and that launched the supercomputing industry...

    These are just glorified SMP machines...
    Don't get me wrong, clusters are great (hey, I even wrote a book on them!), the CPU speeds we have now are beyond my wildest dreams, but there will never be another Cray...

  • Imagine a beo... Oh, screw it.
  • by Dhrakar ( 32366 ) on Thursday March 29, 2001 @02:01PM (#329214)
    Sorry, but I disagree strongly with you that Cray is just 'rehashing old technologies'. As a primarily Cray shop (we have a T3E-900 and, after this weekend's upgrade, the first SV1e) we are quite involved with the Cray User Group (CUG) meetings and such, where Cray's plans for the future (SV2 and beyond) are discussed.
    There are _many_ things going on behind the scenes at Cray that show that Cray is once again trying to push the supercomputing envelope as far as they can. One way to look at the SV2 is as a T3E with large vector units in each CPU (no e-registers) and a nearly flat (shared) memory space across all processors. Thus, no need for mixed mode (MPI and OpenMP) programming like on IBM SP-like architectures.
  • Sorry, big guy. Unfortunately it doesn't work that way.

    The distributed/clustered model is great for some problems - lots of completely independent data processing (like SETI, or Google) works great. But start moving into the realm of scientific simulations where all the calculations are interdependent, and a Beowulf cluster (even with some good interconnect) couldn't hold a scalability candle to, oh, say, a Cray T3E. There's a good reason that the Cray T3E and SV1 won the "Co-Supercomputer product of the year award" this year, as handed out by the people who use them.

  • Please don't. Anyway, you were pretty much on the same level as those, but put in less effort. At least tell me you wet your pants when you confirmed that you got it.
  • But there's No Such Agency...
  • I believe that this quote:

    not to play Quake

    Taken in conjunction with this comment [slashdot.org] entitles you to some elective surgery.

    Funny. Someone else beat me to the first Quake post, but I may be the first non-anonymous guy. And I lost one just last week for an AYBABTU reference...

    Now, where's the AYCABTU reference?

  • If I didn't have a supercomputer in my home, I certainly wouldn't admit that I didn't. Furthermore, I definitely wouldn't question the purchase of such a tremendous platform for Conway's Life. It's OK if you aren't worried about being cool, but you're taking it too far to scoff at a purchase that every American takes very seriously.
  • Sure it should..."Imagine A Beowulf Cluster Of These" is a time-honored tradition, dating back to when cavemen would stack rocks on top of each other to simulate a larger, more powerful rock without the size and expense of a "Super Rock". Other things that can be made into Beowulf clusters include herring, iMacs, the DMCA (how much would that suck), Natalie Portman, CueCats, the iPAQ palmtop, the Counter-Strike mod to Half-Life, and, of course, other Beowulf Clusters.

    Maybe if you posted less as an A/C you'd make moderator someday and you could do something about it; until then, enjoy the Ozzy and shut your mouth.
  • You don't see the local knitting guild or mall-walkers' club investing in a supercomputer. You see them investing in a robust NT-server/WinME-client setup.

    Ah, yes, the knitting guild. You young whippersnapper wouldn't remember back in the day, when every knitting guild worth its salt had at least a mainframe, and the big 'uns had a supercomputer or two.

    Makes me feel young just thinking about it.

    Nowadays, uh-course, the knitting guilds are all run by youngsters, with their newfangled WinNT clusters and WinMe clients.


    Man, we gotta get a rating of "-1 WTF?".
  • ...You can get a quick overview of all the latest Cray developments...

    Exactly how many overviews per second can I get?

  • by Durinia ( 72612 ) on Thursday March 29, 2001 @02:14PM (#329223)
    Yes, the Alpha clusters are just that - clusters. There is a lot of creativity going on with their new products, however. The SV2 is a completely new architecture, with pretty much everything custom designed. It combines the benefits of the vector architecture with the scalability of the huge parallel T3E-style machines. And the MTA-2 is an all-new (commercially) architecture - so new that the first people to purchase them will probably be researchers trying to figure out how best to use them.

    So, yeah, Cray is branching into some lower markets, but they've finally escaped the creative dearth (at least on the high-end) that is SGI.

  • This seems to be the most interesting part about this article.

    If I get all this right, the T3E operating system enables a programmer to see the cluster somehow more as a single computer (with less (no?) load balancing to do in the program itself, and such). If I do get this right, and it really works to a certain point, this would be really fascinating.

    Does anyone know more about, for example, how the T3E works (preferably in simple terms :), or whether there's already some open-source project going on to do something similar?
  • Alpha processor based Beowulf clusters of Linux machines ...

    I'm imagining, but, alas, it doesn't take as much imagination as the Beowulf clusters of everything else that have been posted on /. Somehow the magic seems to have gone out of this one...

    --

  • by Anonymous Coward
    Not quite... Quake runs on it, but the frame rate exceeds the refresh rate of a normal monitor. They have a new Cray monitor that can handle the refresh rate, but they haven't worked the bugs out. When you get attacked with a chain gun, it tends to microwave your head.
  • Heh. But you forgot bouillabaisse. -1 incomplete
  • I thought "choice was good." I also believe that there is such a thing as "the right tool for the right job."

    It would be suicide to simply say to a customer "Hey, I realize that you may have written some business apps (more than a few mission-critical, I'm sure) for those AIX and OS/390 boxen, but if you haven't heard, we're all gung-ho on Linux now. We've stopped supporting / maintaining / updating all of those other OS's because they're not '733T enough. Port all of your software to Linux, you 'lusers.'"

    If you don't have anything nice to say, say it often.

  • If Cray were alive today, I like to think that he'd be directing research into quantum computers, and maybe technologies like those Starbridge Systems is working on.

    If Cray were alive today he'd be clawing the inside of his coffin.

  • by Alien54 ( 180860 )
    While ILM will certainly keep going with the low-end cluster technology, you can bet that Lucas will want one or two of the high-end products just to be able to render the high-end special effects he is so fond of. If the US Navy is going to buy one at 5.2 million US dollars, then you can be sure this would be a worthy investment for someone like Lucas.

    I can see the programmers and designers drooling now.

    Check out the Vinny the Vampire [clik.to] comic strip

  • hey clueboy. The Sun is just a front end. It could have been almost anything, but Sun no doubt cut them a better deal than any other vendor, to get excellent marketing.

    Either way you cut the cake, Sun still got in. What would you rather have had, AMD or Intel making chips for a Cray? Give me a break. They could have gotten a better deal on chips from SGI, who sold off Cray in the first place and that's a given, so what makes you think SGI didn't do specs on it as well?

    "See? UltraIIIs are fast! You can stop making fun of how slow our chips are now! UltraIIIs are fast enough to be associated with (lots of hand waving) CRAY'S NEW SUPERCOMPUTER!" -- Sun marketing.

    Stick to whatever your day job is. When it comes to purchasing equipment (which I have done many times) I would go for what I felt was best, based on benchmarks, industry usage, etc., not some marketing bullshit, so give it a rest.

    You sound upset that Sun has captured this segment of business from Cray; I'm happy for them, as I am for most companies that do well. As for Cray, it's a bit outrageous unless you're a government or a major Fortune 500, but I do use, own, and plan on purchasing more Sun hardware, including a SunBlade workstation for home use, regardless of what people think. Sure, they're expensive, but well worth it, and Alphas just couldn't cut it all the time, so what other options are there? Wait... Maybe I should sit back and wait for you to make a chip, how's that?
  • sure! run the Tru64 version on top of AlphaLinux....
  • Here's a nickel, buy yourself a clue.

    There is a big difference between a vector supercomputer and some random collection of microcomputers: bandwidth, and the ability to efficiently handle large datasets. See the STREAM benchmark [virginia.edu].
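
    For the curious, the kernel at the heart of that benchmark is nothing exotic. Here is a minimal sketch of the STREAM "triad" loop (my own toy version in C; the array size is arbitrary):

        /* STREAM "triad": a[i] = b[i] + q*c[i]. The reported score is
           bytes moved (three arrays * N * sizeof(double)) divided by
           elapsed time -- it measures memory bandwidth, not peak FLOPS. */
        #define N 2000000
        static double a[N], b[N], c[N];

        void triad(double q) {
            for (long i = 0; i < N; i++)
                a[i] = b[i] + q * c[i];
        }

    A vector machine with a fat memory path sails through this; a commodity box stalls on memory long before its ALUs are busy.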

  • The first person to mention Quake loses a testicle

    Actually, I wouldn't be too surprised if an Athlon or PIII could beat this machine at Quake frame rates (that is, if you could install a video card on it...). This kind of computer is very, very fast at vector computations, but very slow for scalar... plus, despite the HUGE memory bandwidth, the latency is relatively poor.
  • Cray has posted several job listings on Mojolin (http://mojolin.com [mojolin.com]) looking for software engineers familiar with Linux. Sounds pretty cool to me.
  • The entire idea of supercomputing is obsolete?!?!?!?!?! Someone better tell that to the hundreds of places still buying supercomputers and SGI, IBM, NEC, Fujitsu, Cray, and several other companies who all make them. As people have pointed out, there are many reasons for buying a real supercomputer rather than a bunch of 286's, which I'm sure Google doesn't really use (Pentiums I would believe):

    Management: Which would you rather manage - 1024 separate PCs, each with its own boot disk, hostname, power supply, etc., or a Cray T3E with a single system image and one boot disk? Think about the time it would take to do an OS upgrade on the cluster.

    Bandwidth: Stuff like Myrinet and Quadrics is quite good, but it still doesn't come near the bandwidth you can get on a traditional supercomputer. Google and SETI@home are *horrible* examples of real scientific code because they do almost no internode communication. We can get 1.6 gigabytes/second full duplex between nodes on our Origin 3000 product. The T3E gets even more than that.

    Latency: The time it takes to get from node A to node B matters *a lot* with real code. Again, SETI and Google don't care if it takes 100 microseconds instead of 4 to exchange data. When you are exchanging lots of data and synchronizing with many other nodes, this matters. Many massively parallel jobs spend large percentages (like 25%) of their time doing communication, much of it very small messages.

    Quality: Usually, you get better components when you buy a supercomputer than a PC. Does this matter for you? Probably not. If you are trying to predict where a tornado is going to touch down, you're going to be a lot more interested in whether the machine is running.

    Ease of coding: It is a lot easier to use a model of coding called OpenMP, which relies heavily on shared memory between threads, than MPI, in which you have to explicitly call for communication between threads to happen (a toy sketch of the contrast appears at the end of this comment). OpenMP runs best on large SSI supercomputers.

    Now don't get me wrong - there are many applications for which a cluster is sufficient. This doesn't mean there is no room for supercomputers. Besides, if you look at the direction Quadrics, Myrinet, and the new InfiniBand stuff are going, they are going to end up looking a lot like a shared-memory supercomputer....
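
    To make the OpenMP/MPI contrast concrete, here is a toy sketch (mine, not from the article) of the same array sum in both models. The OpenMP version just annotates a loop over shared memory; the MPI version has to split the data and request the communication explicitly:

        /* OpenMP: shared memory, one annotation does the work */
        double sum_omp(const double *x, long n) {
            double s = 0.0;
            #pragma omp parallel for reduction(+:s)
            for (long i = 0; i < n; i++)
                s += x[i];
            return s;
        }

        /* MPI: each rank sums its local slice, then the partial sums
           are combined with an explicit collective call */
        #include <mpi.h>
        double sum_mpi(const double *local_x, long local_n) {
            double s = 0.0, total = 0.0;
            for (long i = 0; i < local_n; i++)
                s += local_x[i];
            MPI_Allreduce(&s, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            return total;
        }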

  • Heh, yeah, the Suns can perform, but isn't it possible that the lone specification for the I/O cabinet was that it isn't made by SGI?
  • All your beowulf are belong to us?
  • Well, if you call making a next-gen vector machine "rehashing old technologies", then the guy you replied to is probably right. But the MTA-2 is unlike anything out there, and the SV2 looks very little like a SV1/e/ex. The SV2 is designed to be a follow-on to the T3E, with its massively parallel setup, and a follow-on to the T90/SV1 series, with its vector processing. Also, the SV2 has a single system memory image, unlike the T3E. Cray is taking vectors to a whole new performance/organization level.
  • This is not exactly a new thing for Cray. They have used Suns for *years* as front ends. The SWS (system workstation) for stuff back at least as far as the Y-MP was a Sun 4/3xx rebadged as a Cray and running SunOS. The SWSs for more recent Crays are up to SPARC 5s. The Ultra Enterprise 10000 was actually designed by Cray (well, by a company Cray bought called SuperServer, I believe) and sold to Sun either right before or right after the merger with SGI (boy were we ever stupid to give that to Sun!). So this is not like Sun just jumped into a market they never had before.
  • Or you could look at their employment page here: http://www.cray.com/company/employment/openings/eagan/index.html [cray.com]
  • They don't have any 286s - no MMU, Linux can't run.
  • So, you mean to say that gigabit ethernet isn't as fast as custom-designed router interconnects? *cough*. T3Es have extremely high bandwidth with low latency. Ethernet has high bandwidth with high latency. Which would your massively parallel codes (the ones that can't be broken up into smaller chunks SETI-style) prefer?
  • by Boone^ ( 151057 ) on Thursday March 29, 2001 @03:05PM (#329245)
    People don't buy vector supercomputers so they can run MySQL and Apache. They're for scientific programming. Think of scientific programming as weather analysis, car crash tests, airflow analysis, etc. There are people and companies who use computers for a lot more things than you've apparently realized.

    Also, would it shock you to let you know that Cray machines have TCP/IP stacks, and ethernet ports, and all that? They don't have video cards, so you have to connect to them somehow...

  • I'm not sure what exactly they meant by "the cluster will run the T3E operating system even though each node will run Linux". That is a contradiction. The T3E ran something called Unicos/mk, which was a full-blown Unix-type OS. In any case, what they probably really meant is that they will port the load balancing, gang scheduling (scheduling a job onto a group of processors), and some sysadmin tools. They will probably also port some of the process accounting tools. I suppose they could port Unicos/mk's kernel servers to Linux instead of the Chorus microkernel that hosts them on the T3E, but that would take a *lot* of work and may violate some of the agreements signed with SGI before the spin-off. I also doubt they have the resources to do this and continue to develop the OSs for the MTA and the SV2 (two different OSs), plus maintain Unicos (for the SV1 and previous vector systems) and Unicos/mk (for the T3E).
  • It's too good to pass up -- Can you imagine a Beowulf cluster of these Beowulf Cluster Crays?

    Anyway, a professor in grad school seven years ago told me Cray had all but died, cancelling their latest processor, because the fastest RAM they had back then would still cause their processor to idle for 350 cycles just to load one value.

    Intelligent compilers at best can only use that wasted time performing maybe 10 instructions ahead, not the 350+ required of a superscalar vector computer. Hence, the processor was basically useless, starved of information. Project cancelled. Cray on Skid Row.

    I presume they have licked this RAM-to-processor problem?

  • I used to be active in the parallel processor field. I helped build some iron that analyzes 6Tb/sec for a particle physics experiment.

    On the supercomputer front the game has changed radically since the original Crays. The Cray 1 was a SIMD machine: one instruction stream controlling multiple processing elements. That architecture works well for a limited number of problems; the trouble is that most problems turn out to have bits of vector code interspersed with decision code. If 10% of your code cannot be parallelized, then even an infinite number of processors can only go 10 times faster than a single processor (a worked version of this arithmetic appears at the end of this comment).

    The attraction of vector boxes was that there was no need to recode the FORTRAN application, the compiler could detect the parts of the code that could be parallelized and optimize the code. The problem is that there are limits to what the automatic parallelization can do.

    The upshot is that there tends to be little advantage in more than 8 or 16 processors in a vector box. Meanwhile a standard Pentium 4 has multiple independent processing pipelines - I forget the number (4 maybe). So the gap between the Cray box and the mainstream may not be amazing.

    At this stage most of the problems in science can be attacked using MIMD architectures. These range from the SETI style very loose coupling over the internet to closer coupling such as the SMP machines.

    The actual speed of the cluster is pretty much irrelevant; I can build a SETI-style parallel computer using off-the-shelf hardware for less than $1000 per processor. But that only allows me to handle problems that can be broken down into lots of independent sub-problems: trivial parallelism.

    What Cray appears to be doing is building a machine that has closer coupling between the processors. There are certainly problems for which this approach is the solution. I doubt that the number of such problems is commercially viable, however. The problem is that many of the traditional supercomputer problems are now dealt with using loosely coupled clusters; 100Mbps Ethernet is probably adequate for many of them. Other traditional 'supercomputer' problems are now attacked with desktop servers. I remember doing work with astrophysicists who used to wait for time on expensive mainframes; these days a cheap Linux box meets their needs.

    Even 'defense' (read: corporate welfare dept) applications are no longer automatically supercomputer class. Sure, they may do a lot of processing, but these days it is likely to be optimization-type work, which in turn tends to break down into a series of independent simulation runs.
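
    As promised above, the 10% arithmetic in code form (my own toy illustration, not from the original post):

        /* Amdahl's law: speedup on n processors when a fraction p of the
           work parallelizes. With p = 0.90, the limit as n grows is
           1 / (1 - 0.90) = 10x, regardless of processor count. */
        double amdahl_speedup(double p, double n) {
            return 1.0 / ((1.0 - p) + p / n);
        }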

  • We? What are you talking about? Who here has even had any input into the purchasing of a super-computer?

    I have. I have built them, bought them, designed them.

    In many cases you are right, supercomputers are bought not on the technical merits but on the 'cool to have' corporate ego trip budget.

    Twelve months ago there would no doubt have been a queue of dot coms eager to find something slightly less mindless than a superbowl ad to spend money on, today I think not.

    Supercomputers are like Formula one racing cars, expensive to buy, expensive to run and can only be used to advantage by a very small number of people.

    These days I would guess that 95% of the people who can use a supercomputer to advantage well enough to justify the sticker price are working in much more profitable and stock option rich fields.

  • I'm currently trying to get this started:

    "Can you imagine what Taco Bell would have given out for free if one of THESE had landed on its target?"

    Help me out here, peeps.


  • Okay, I'm a NEC/HNSX Supercomputers employee, on the verge of becoming a Cray employee (because of the agreement they signed), but I'm not speaking for anyone else but me here, of course. :-)

    I don't know why people bother with such news. Sun's gonna provide the I/O processor for a not-so-high-end supercomputer. And?

    A few weeks ago, there was a real bombshell: Cray would drop the anti-dumping legal action, re-opening the US market to Japanese supercomputers. Cray will even become the sole reseller of the NEC SX Series in North America!

    If you go take a look at www.cray.com [cray.com], you'll see that this agreement with Sun occupies a single line in their news listing, while the NEC agreement is a big framed box that occupies about half of my screen here.

    For some time now, American supercomputer customers had been petitioning to get Japanese machines, because it had been a long time since the American machines were up to much good. Instead, we hear about the SV2, which will barely surpass the processing power of the few-years-old SX-5, with less memory throughput than the SX-5.

    I won't deal with the "no need for big clunky vector supercomputers, we have clusters" crowd. I believe a whole lot in clusters, but they're freakin' hard to program, and some things just won't be as fast (hey, the SX-5 CPU has a 256-byte-wide memory path! that's not bits, that's bytes! what can you do with your puny gigabit Ethernet cluster interconnections?).

    Look at these bandwidth benchmark scores [virginia.edu]. The closest thing to a cluster, the Origin machines, are literally crushed to bits by the SX-5. And they're doing twice as well as the SV1.

    As for using old big-iron machines for stuff like fridges and so on, there was a cool thing at one of our customer sites, at the University of Stuttgart: a Cray coffee table. :-)

    Nothing beats talking about supercomputer technology while drinking some orange juice on top of a Cray machine. NOTHING.



    --
  • Quoted from the article: "...clusters of Linux machines that as a cluster will run the same operating system as Cray's T3E supercomputers..." Since what makes a machine a Linux machine is the OS it runs, how are these Linux machines if they run Cray's OS?!? -- IANALU (I Am Not A Linux User)
  • A proprietary OS on the main supercomputer is required to make sure every last thing is optimized.
  • Oh yeah - the "spite" factor. Is there a company policy now against using "the font"?
  • At least you can actually get an EV68, as opposed to the vapor UltraSPARC III, which has yields approaching zero and doesn't even come close to the performance of any EV6 (but you knew that, since you base your decisions on FACTS like benchmarks).

    Vaporware? Or oversoldware? I would rather oversell stock than have shit lying around. As for EV6s, why in the world would I want a shitty Compaq if I were going to buy a supercomputer? Shit, for that I'd save for ASCI White Jr. or something, and I still wouldn't go with DEC/Compaq. Ahhh......... IBM SP Power3 Winterhawk II, my kind of computing power

  • Actually, I am quite qualified, having worked with clusters, Crays and other supercomputer-class machines for quite a long time ( > 15 years), but since you're an AC we'll ignore your silliness and get to the point...

    My point was exactly that these new machines are not vector supercomputers; they are basically just SMP clusters... built from off-the-shelf parts like SPARC processors, they can't be supercomputers in the style Seymour Cray envisioned.

    They're fast machines taking advantage of all sorts of innovations in parallelization and clustering, but "Crays" in the classic sense of a machine developed as a best-of-breed, highly engineered supercomputer they ain't.

  • Um, you forgot to mention that in your STREAM results, the 8-year-old Cray T932 was actually the closest contender, at about 1/2 to 2/3 of the SX-5. The SV2 is much more closely related to the T90 series in performance level than to the SV1 (read: budget line). Memory bandwidth is not the only performance measure, either - how about scalability? Interconnect latency? Cray has a pretty good record in those arenas as well (see T3E).

    As to your coffee table... I love it! A guy I know has an old SuperServer that he converted into an end table. Oh, and in one of the old Cray buildings in WI, they use an old Cray X-MP as a waiting-room couch. That's a nice touch.

  • This Cray company doesn't really have anything to do with Seymour (other than the name). The architecture is more inspired by Tera and SGI than by Seymour.

    Remember back 5+ years ago: there was a Cray Research (which built the T3E) and a Cray Computer (which was still building vector machines - the last of which, the Cray-4, was never bought by anyone)

    If you want to see where Seymour's influence still reigns - check out SRC Computers [srccomp.com]. Hint: SRC doesn't stand for "source" - it's Seymour's initials.

  • Uh...did you actually read the article? The Sun processors aren't for the Supercomputer - they're to control the I/O. They're the "front-end", so to speak. Everything in the SV2 (except memory) and MTA-2 is completely custom. And the architectures are very VERY novel.
  • This has to be the last thing California needs: a faster (more power-hungry) supercomputer eating all the power. (Not to say that the release of Pentium® 4s helped)

    Good news: using the new supercomputer, we have found a cure for cancer. Bad news: three nuclear power plants overloaded, spilling tons of radioactive dust.

    Weight lifting the biggest waste of Iron since Big Iron
  • I'd hardly call having 512 Intel chips in a box very "Seymour-like".

    And yes, this (the one in the article) is the same Cray that built all the vector machines. Check out the gallery [cray.com] if you don't believe me.

  • Intel has a different market than AMD does, and I sincerely doubt that will ever change. Intel covers all kinds of CPUs, but AMD currently has only its Athlon/Thunderbird.

    AMD also has the Duron, which competes with the Celeron (I don't think it is doing well in the market, I think due to motherboard prices). I think as far as x86 compatibles go, the only thing AMD is missing is a multi-CPU, extra-costly CPU like the Xeon. As far as I know the normal AMD can run in a very large machine (same bus protocol as the EV6, which runs in 40+ CPU machines); it is lacking motherboard chipsets that do it. The Xeon still has more cache. Oh, I almost forgot about notebook CPUs: AMD doesn't yet have a great one on the market, only announced.

    The other Intel CPUs (i960, XScale, and so on) aren't what everyone is talking about, and they don't make nearly as much money.

    As for x86s, it looks like the top o' the heap is really, really close again. I think Intel is going to have the lead again for a while, but they may lose it when the next round of AMDs are out (not the next shrink, but the next real design), or they may not. I may change my mind on that over the next few months, but if I had to bet today, I would have to bet on Intel. Of course I like AMD more, but...

    Slashdot only seems Intel biased to ignorant fools that don't know the difference.

    So go hang out in comp.arch.

    Oh, and AMD's chips also work well as space heaters.

    Does that really matter in a desktop? My AMD box puts out less heat than my Sun 4/110 (not that I power the 4/110 up much). The only thing I care about in a desktop (other than how fast it is) is fan noise. My Intel and AMD boxes used to make the same amount of fan noise, but the Intel blew its power supply, and the one I picked up locally is really just too noisy.

    In laptops Intel has a fast CPU, and AMD just doesn't. However, Intel's puts out a crapload of heat, so it has an insanely loud fan (in my Vaio). For laptops I'm kinda partial to Apple's G3 and G4. They even ship a Unix that isn't too sucky :-)

  • "Welcome to Supercomputer users anonymous. I'd like to welcome our first speaker tonight, Stuart."

    "Hi, my name's Stuart, and I'm a s-supercomputer user.

    "I first started useing them quite recently, just six months ago. I got offered a small amount of computer time, for free. It's always the way of these things, let you use these pointless things, get you hooked.

    "Anyway, I though I might just try it. You know, the first time can't hurt. Besides, I thought I w-would be able to give it up, any time I wanted.

    "All I wanted to do was to get the progam to run that little bit faster. It was calculating the ground state energy of an ordered perovskite. A big job, to be sure - it took nearly 200 MB to hold the wavefunction.

    "I packedged up all the code I'd got to that date. Burnt it off to a CD, just in case, you know.

    "The Supercomputer sprinted through the code in thirty minutes. I just wasn't prepared for that. It was a feeling I hadn't felt before. Normally, on the RS/6000's we have ourselves, it took a few days, so I was literealy gobsmaked.

    "Well, one thing lead to another, and we were offered money if we could calculated a disordered perovskite, with lithium interstitials. I didn't even think of using our own computers, my thought turned straight to the supercomputer.

    "On reflection, I can see now that I wasn't thinking clearly. After all, just because it took nearly a gigabyte of disk space to hold the wave function there was no reason why an ordinary computer could have done that. With some swapping, as it needs to hold three of them in memory at once.

    "And, of course, every thing the code did was a vector operation. That confused me, because I should have seen that a scaler processor was more efficent at doing vector calculations that a processor designed to do them, but that's the supercomputer addiction kicking in.

    "It took a few days on the supercomputer. That's when I realised what had happened, and took my chance to return to normality.

    "Fortunatly, with support, we managed to leave the supercomputer behind, and get a 24 node Beowulf of Pentium-III's.

    "Just to show how unnessecary these so called 'super' computers are, the Beowulf is now running the code. We're a little concerned that they haven't produced any resulsts after three days, due to constantly swapping, but that's jsut after effects of the supercomputer. After all, the natural state of a computer is disk tharshing.

    "The way the processors now take thirty times longer to do a single vecotr operation is a lot of comfort to me. I can see the light now."

    --
  • All of this reminds me of the story [ucsf.edu], from the early '80s when Apple bought a Cray to help with their next processor design -- And Cray bought a Mac to help with their next processor design.
    --
  • by wiredog ( 43288 ) on Thursday March 29, 2001 @05:23PM (#329266) Journal
    The Smithsonian's Air & Space Museum has a Cray 1 on display. [nasm.edu] Look at the specs, the cost, and reflect upon Moore's Law.

    Its processing speeds, of around 150 million floating point operations per second, were far above anything else at the time of its announcement in 1976. Those speeds are now matched by inexpensive workstations that fit on a person's desk.

  • Actually, Apple bought the Cray to help with designing the injection molds for the plastic cases. Turns out that those are rather tricky to design. By the time I was at Apple in 1996 the Cray was being used as a storage system that people could do backups to.
  • Well, the economics works this way. Linux == free (as in beer) developers, so it costs less to develop than proprietary operating systems. A company which embraces Linux on the large server end can undersell their competition because they have lower R&D costs.

    But these companies don't want to abandon their existing customers either, so they are stuck maintaining their proprietary systems. Some companies like IBM are contributing large quantities of their proprietary material to open source in order to further drop R&D, but others like Cray don't see the value in this.

    Don't think for a moment that these companies are committed to Linux as a concept. They are committed to the almighty dollar. However, Linux is a means to this end and it will serve them well.

  • Oh, and one more thing. For the last few months or so, I have been aware that Cray is hiring lots of Linux kernel developers, so I do wonder if they are looking at tackling greater issues like SMP or if they are only looking at drivers, et al.
  • by Noer ( 85363 ) on Thursday March 29, 2001 @07:09PM (#329270)
    The refocus on vector supercomputing is interesting. I wonder if it might have a side-effect of helping scientists take advantage of the Altivec units on PowerPC G4s. Yes, I know the Altivec can't do double-precision floats, but you don't ALWAYS need that, and companies like GCG in the biotech industry are excited about taking advantage of OS X on G4 hardware for bioinformatics. For tasks that don't need the full power of a Cray, but are nonetheless vectorizable, I hope the cross-pollination of vectorization in algorithm design will benefit everyone.
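
    As a toy illustration (mine, not GCG's code), the shape of loop in question is the classic single-precision saxpy, which an auto-vectorizing compiler can map onto AltiVec's four-wide float units:

        /* y = a*x + y in single precision: independent iterations, unit
           stride, no doubles needed -- exactly what a SIMD unit like
           AltiVec can chew through four floats at a time. */
        void saxpy(long n, float a, const float *x, float *y) {
            for (long i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }
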
  • Yeah, special hardware assists for programming models seem to have disappeared. We are planning some for future SGI machines, though, and interconnects like Myrinet have sorta-kinda assists for MPI.

    As for the scheduling: at least on Irix, and I assume Unicos/mk, the scheduling/memory management is good enough that you can run multiple jobs. The way it works on Irix is that you can dedicate a certain set of CPUs and memory to a job and other sets to other jobs. That gives the job dedicated access to only the amount of hardware it "needs" (or the programmer thinks it needs, anyway). I know similar stuff exists for the Cray T3E, but I'm not familiar with the details.

  • A few weeks ago, there was a real bombshell: Cray would drop the anti-dumping legal action, re-opening the US market to japanese supercomputers. Cray will even become the sole reseller of the NEC SX Series in North America!

    Sounds like protectionism cum corporate welfare to me. Cray Corp fails commercially, then gets to blackmail its competitors into cutting it a distribution agreement to let it back into the protected market.

    Like Cray wasn't subsidised itself. The NSA and Los Alamos bankrolled them from the Cray-1 through at least the Y-MP.

  • Each node in the cluster runs Linux and Unicos coordinates the nodes.
  • When MIT finally decommissioned the CM-5, someone had the idea that maybe we could give it to Bill Gates in return for his paying for the new building.

    Turned out that he had one already.

  • Well, remember that the MTA is not being worked on by the remnants of Cray Research :) From what I hear, the SV2 OS is coming along quite nicely.

    And yeah, I forgot to include the MPI/shmem library and compiler in my list of what they would probably port.

    As for Unicos/mk, the real difference there was the ability to have no difference at the user level between it and other Unices. It is able to present 1800 different kernels to the user as if a single OS were running. No small feat in OS development :)

  • They're fast machines taking advantage of all sorts of innovations in parallelization and clustering, but "Crays" in the classic sense of a machine developed as a best-of-breed, highly engineered supercomputer they ain't.

    The distinctive aspect of Cray's work was not the style of parallelism; it was the choice of gate technology. Seymour liked ECL and gallium arsenide.

    If he were around today he would be building MIMD machines. There are structural limits to SIMD that mean the maximum number of vector processors you can keep running is about 16, and there are limits to backplane technology that mean the number of nodes on a shared backplane that can be kept busy tends to top out at about 16.

    What would be different about Seymour's MIMD is that he would dunk it in a vat of cryogenic coolant, allowing the clock speed to be boosted by about three to four times.

    He was trying similar tricks with his GaAs Cray-3, which the company that bears his name was too scared to build. He wouldn't be messing around with GaAs these days, however; silicon processing now gives better speed.

  • The entire idea of supercomputing is obsolete?!?!?!?!?! Someone better tell that to the hundreds of places still buying supercomputers and SGI, IBM, NEC, Fujitsu, Cray,

    Yes: a market of hundreds of machines being fought over by five-plus large companies.

    People still buy supercomputers even when they are obsolete. When DEC released the Alpha they gave me one for evaluation. It outperformed the main IBM mainframe on the site by about 30% and cost less than 1% of the IBM's annual maintenance.

    There are a few applications for which a supercomputer is the answer. However my experience is that they tend to be a political purchase rather than a technically necessary one.

    When I was at CERN, the experimental physicists demanded time on the Cray since, as CERN is an experimental lab, they should get the best tools. GEANT, the piece of absolute crap they used at the time to simulate experiments, runs no faster on a vector machine; the problem simply does not work well on the architecture.

    Now there were plenty of theoretical physicists who had code that would work well. So guess what the CERN management did? They allowed a five-man team to spend four years rewriting GEANT for the Cray. The project would have taken longer, but the machine was decommissioned first.

    The World Wide Web was invented largely to circumvent the idiotic dictates of the incompetent CERN Network division management, although Tim will never admit that in public (nor will I for that matter without a nym :-)

  • I submit that if you look at the direction clusters are going in, they are heading for an MPP supercomputer. What do Myrinet and Quadrics have that gigabit Ethernet doesn't? The ability to do a write from one side to the other without going through the OS (a toy sketch of what this looks like to the programmer appears at the end of this comment). I suspect they are trying to figure out a way to do reads that way also. They are heading for NUMA.

    Also, I could be reading you wrong, but you seem to be implying that supercomputer == vector supercomputer. I am including NUMA style machines like the Cray T3E and large SGI Origin machines as supercomputers. Since your example app wasn't vector, I'll assume it was something along the lines of MPI, which would run quite well on those architectures.
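
    For anyone who hasn't met one-sided communication: in MPI-2 terms it looks roughly like the toy sketch below (my own illustration; the interconnects above do this kind of remote write in hardware, with no receive call on the target):

        #include <mpi.h>

        /* Rank 0 deposits a value directly into rank 1's window; rank 1
           never posts a receive. The fences bracket the access epoch. */
        void one_sided_demo(int rank) {
            double buf = 0.0;
            MPI_Win win;
            MPI_Win_create(&buf, sizeof buf, sizeof buf,
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);
            MPI_Win_fence(0, win);
            if (rank == 0) {
                double v = 42.0;
                MPI_Put(&v, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
            }
            MPI_Win_fence(0, win);   /* rank 1's buf now holds 42.0 */
            MPI_Win_free(&win);
        }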

  • Uh, if you are using a T3E with more than one processor per partition, you are using some form of "single image mode" since the mk kernel runs on only one processor.

    Also, I don't know how many large T3E customers you've talked to, but I've talked to several (7 or 8) and almost all of them cited the single image as one of the reasons they love the machine, specifically as opposed to the IBM SP2. Besides, even if what you say is true, I suspect that the chunks are larger than 4 PE's per partition, which is what you'll get (effectively) in a cluster.

    Finally, while you may not miss being able to do a "ps" and see all the processes on the machine, you will probably miss the lower latency, higher bandwidth, and shared memory when you move to a cluster, well designed or not....

  • Actually, they still are [sgi.com]. (Press release from before they were sold).
  • Almost all supercomputer type apps need double or extended precision floating point arithmetic, especially computational chemistry codes.

    You should consider Itanium, which will do up to 8 DP instructions/clock cycle.

    At 800MHz, it will peak at 6.4 GFlops DP.

    This is one of the reasons Intel bought Kuck & Associates, to provide vectorization capability to their own compilers.
  • When did this happen? The last thing I heard was that he was breaking off from Cray Research to start his own similarly named company to continue building the fastest possible supercomputers. Of course that was at least six years ago.

    Is there a history of Seymour Cray somewhere?

  • I think one of the most interesting aspects of their MTA-2 supercomputer is the fact that they're using USPARC-IIIs, which use commodity SDRAM (though atypically, since it runs on a 150MHz bus). It's nice to see the chip given room to stretch its legs, since it is basically languishing inside of Sun. Their server products aren't shipping with the processors yet, so there's little in the way of real-world benchmarking. Hopefully Cray changes that around a bit. I'm also glad to see Cray making better news than sitting as an unused subsidiary of SGI.
  • He died in a car accident in 1996 shortly after founding SRC Computers [srccomp.com] (the "similarly named company" you mention; SRC is his initials). The company's page has a brief history [srccomp.com] of his work, though I'm sure there are plenty of more-complete such histories out there.
  • Maybe I'm missing something, but when someone says Cray is the "only mainstream recognizable name in supercomputing" I think they're seriously out of touch. At least out of touch with the list of Top 500 [top500.org] supercomputing sites.

    IBM has at least half the entries here. And the all important #1. Cray shows up at #10... after 5 IBM installations.

    What's up? Is someone claiming Cray belongs to some special set of supercomputing? Would they care to elaborate? Or am I just being overzealous....

  • Read up on the MTA concept. Sun stuff is only used for I/O and not calculations; the MTA processor is what's interesting.
  • The only "monolithic" supercomputer I ever had any respect for (as opposed to networks of individual machines like beowulf clusters or DECNets) was the KSR/1, and it only existed for about a year before the company's financial woes killed it.

    They had some incredibly cool features, like the fact that the system ran a UNIX variant (OSF/1) not on a front end like the Crays, but right on the box itself.

    They also had a wild memory architecture where there was no main memory. Instead, each processor had a 32MB cache, and the system virtualized the caches into a giant virtual memory space (giant for the time, now 2GB main memory is average to low for serious computing).

    Process migration and the scheduler were a thing of beauty. When a process needed to be moved to a new processor, only the stack pointer and registers needed to be moved. When the process was ready to run again, a simple(-seeming) page fault would take care of everything else, moving its stack and any other memory pages it needed locally.

    They even solved the performance problems of a ring-based bus, and got better performance than most flat busses. One of these suckers with 1024 nodes was a marvel to behold, and alas, there will never be another. :-(
  • Also, I could be reading you wrong, but you seem to be implying that supercomputer == vector supercomputer. I am including NUMA style machines like the Cray T3E and large SGI Origin machines as supercomputers.

    It's been a while since I followed that market closely. My main point was that Seymour would not be building vector boxes; that architecture has passed its sell-by date.

    The reservation I have about NUMA is that the entire hardware design is built around support for code written for single processors in languages that carry a lot of baggage.

    The number of times I have seen a physicist demand supercomputer time for code running bubble sort... I have frequently been able to tune code to run faster on a PC than the physicists could get it to run on a Cray.

    Point is that the whole approach of writing analysis code in FORTRAN and running it on parallel boxes is obsolete. What is needed is an operating system and language for manipulating mathematics directly. Something that combines the spreadsheet with Mathematica in a way that moves beyond the constraints of Visicalc and clones.

    The imperative languages the physicists use make recovery of parallelism a very hard job. A declarative approach to defining the problem would save a lot of time, avoid a lot of errors, and parallelize better.

    As Tony Hoare said "Physicists used to repeat each other's experiments, now they share each other's code".

    My conclusion is that the physicists don't deserve fancy iron, they don't care to learn how to use it, they are simply using it as an alternative to thought and an ego boost.

  • No, *nobody* would rather admin 1024 PCs. They may be *forced* to for cost and/or availability reasons, but that doesn't mean they *want* to. I'm not so stupid that I can't think of how to assign hostnames and IPs, and as long as everything works, those schemes are fine. It's when stuff breaks that you get in trouble.

    ASCI Red was on top because Intel threw so many processors at it. LINPACK is not really all that representative of customer code. If you can tell me (and have data to back it up) that Sandia's code ran as well on ASCI Red as it would on ASCI Blue Mountain or a Cray T3E, I would be very surprised.

  • No - Seymour would have been building FPGA boxes. For Seymour's last design, see http://www.srccomp.com [srccomp.com].

    As for the physics stuff being obsolete, I'd say that it's not. Now this isn't because I don't think there's something better we could be doing, but because there isn't yet....

  • I still don't see your point. So you had the number one spot on the Top 500. That doesn't mean that anyone even *bought* your system (yes, I know Sandia bought Red). All that means is that you built something and ran LINPACK on it. I hardly see how that supports your claim that people would rather manage an independent cluster of 9000 systems than a T3E.
  • Well that would be because you keep switching points. You started out saying that it is easier to admin 1000 PC's than a Cray T3E, a point which I strongly object to, and finished by saying that not all tasks require a traditional supercomputer, which I agree with and stated quite clearly at the end of the post to which you originally objected.

    Also, you keep trotting out examples of things that are in the category of "embarrassingly parallel". Rendering video and chip design are two examples of that. Weather forecasting, oil exploration, particle physics (read: nuclear bomb simulation), and protein folding (among many other things) are *not*. They require communication. This is why Pixar doesn't have a supercomputer and why Los Alamos National Labs does. If you don't understand why latency *dramatically* affects the speed of an MPI program, go read up on the subject and get back to me.

    Finally, you don't seem to have much knowledge of networking. You don't just "add 100x the processors" and expect the network to scale. That takes careful planning and hardware that you don't buy at CompUSA. I think you will find that building a really good cluster, while perhaps cheaper than a supercomputer, is a lot closer to that price range than you would expect.

    As for why Intel got out of the "supercomputer" business, I suspect they got out because they had no product for which there was a compelling reason to buy one. The Paragon was, for all practical purposes, a cluster (and from what I hear, not all that fantastic of one, but I may have a biased view on that). Plenty of people sell clusters.

  • That's too bad. I remember first being exposed to the work of Seymour Cray in an article in Omni back in the 80's. It talked about how he was designing his next greatest supercomputer and it was going to be submerged in a very cold liquid.

    Oh well...are there any other mavericks out there in supercomputer design?

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...