Cray SV1 Named Best Supercomputer for 2001 171
zoombat writes "The BBC reported that the Cray SV1 product line won the Readers' Choice Award for Best Supercomputer for 2001, as voted by the readers of Scientific Computing & Instrumentation magazine. These beasts have some pretty remarkable stats, including a 300 MHz CPU clock, up to 192 4.8-GFLOPS CPUs or 1229 1.2-GFLOPS CPUs, and up to a terabyte of memory. And they sure know how to paint 'em real nice. Of course, we all know how "scientific" the Readers' Choice Awards are..."
Nostalgia Alert (Score:1, Interesting)
Re:Three dead and several wounded (Score:1)
"natural right"? What's a "natural right"? How is a right, any right, "natural"?
Go away until you learn some basic philosophy, troll. I would suggest Locke as a good starting place for answering this question.
And, by the way, the whole point of the second amendment was that citizens would be able to protect themselves from a corrupt government--much like happened during the American Revolution. Calling for open and accountable government is admirable--but power corrupts. Period.
Imagine (Score:1, Redundant)
Yeah, but... (Score:3, Funny)
Re:Yeah, but... (Score:1)
Programming Language 2001 Readers' Choice Award (Score:1)
only reason it won... (Score:2)
clusters? (Score:1)
Logical versus Physical. (Score:1)
Networks exist irrespective of the data that flows through them; a cluster is defined by that very data.
Bob-
Re:clusters? (Score:1)
Cluster: A computer consisting of many smaller computers. A cluster acts like ONE computer.
Network: A group of computers connected together for data communication, not necessarily acting like one machine.
Go to Beowulf.org [beowulf.org] for more information.
Real Seattlites use Crays (Score:1)
Now that will stick it to Bill G and help the local economy at the same time
"best", but not most sexy... (Score:3, Interesting)
(Nice big CM5)
http://archive.ncsa.uiuc.edu/Cyberia/MetaComp/Ima
Makes the SGI Origins (see below) look like freakshows:
(128 CPU Origin 2000)
http://gepard.cyf-kr.edu.pl/GRIZZLY/or2.jpg [cyf-kr.edu.pl]
(A cluster of [many] 128 CPU O2K's)
http://www.ccic.gov/pubs/blue00/local_images/blue
(A 256 CPU O3K, a 16 CPU O2K, and some RAIDs)
http://www.cines.fr/images/IRISetMINERVE2.jpg [cines.fr]
Re:"best", but not most sexy... (Score:1)
Wow, a raised-floor computer room with outside windows! Whodathunk. I've never seen one. Has anyone else seen a computer room with outside windows? All of the ones I've been stuck in have been in basements.
Re:"best", but not most sexy... (Score:1)
Re:"best", but not most sexy... (Score:2)
(it's in the back - kinda bad picture)
Cray2 [cray.com] - sometimes called "the world's most expensive fish tank".
T90 [cray.com]
Re:"best", but not most sexy... (Score:2)
Most sexy belongs to the Thinking Machines CM-5 "Blinking Machines":
Nah, the blinking lights on the CM-5 are a pale imitation of the Intel Paragon's - there you could see the dataflow between nodes visualized by the lights. Thinking Machines wanted that, but it became too complicated/costly - so they used a random algorithm instead.
Re:"best", but not most sexy... (Score:1)
No, the "random-and-pleasing" mode was just one mode of operation for the LEDs. It was also possible to write code to control the LEDs; people wrote banner programs to make messages scroll by on the side of the machine.
But the LEDs were really handy for diagnostics. I helped install and debug a 1024-processor CM-5 at Los Alamos. The people who wrote the diagnostics suite made them do various things with the LEDs, so you could stand at the end of the machine and watch. When one or a few LEDs behaved differently from those around them, your eye could catch it right away. That, plus the diagnostics output, would lead you to the processor board to swap out.
More Origin 2000 Pics (Score:2)
(Two *big* Origin 2000s)
http://w3.physics.uiuc.edu/~wilkens/Images/NCSA/O
(The neat O2K LCD... too bad O3K doesn't have that)
http://w3.physics.uiuc.edu/~wilkens/Images/NCSA/O
(The O2K "boxes")
http://www.unite.nl/nieuws/algemeen/levering.html [unite.nl]
Re:More Origin 2000 Pics (Score:1)
Re:"best", but not most sexy... (Score:1)
yeah, but no personality (Score:1)
They pretty much looked like all the other big iron in the room. Gone was that distinctive C-shaped tower. So was the need to hire a plumber to help install the water- or freon-based cooling system.
Granted, these big guys are impressive, but they've lost that certain joie de vivre that once distinguished them from the other iron in the room.
Re:yeah, but no personality (Score:2)
Seymour Cray is dead. Dr. Cray was one of those genius-nutcase types; he wanted to build a private tunnel from his home outside Eau Claire, Wisconsin to his cottage on Lake Superior, for one thing. I know for certain that he insisted on at least two things: he believed that if you pay a million dollars or more for something, you should 1) be able to sit on it and 2) have your choice of any color. For that reason, you can get your Cray supercomputer in any color you like, and all the older "C-shaped" models that you refer to had padded seats somewhere on the case.
Re:yeah, but no personality (Score:2)
The Cray 1/2 were rounded because that was the optimal (distance-wise) way to route all of the wires. Shorter wires mean faster clock speeds, and those machines *came* overclocked. The seats were just pads covering the cooling units. The C-shaped Cray 2 was supposed to be a full circle (again, wire lengths), but they couldn't find techs small enough to crawl down in there to route wires and fix stuff. So they made an opening, and the resulting C shape was *purely* coincidence. OK, so I don't believe that either, but that's the official story.
I heard this from one of the mechanical guys... one reason the shapes are more "boxy" is shipping concerns - doorways, etc. Not many people wait until the machine arrives to construct the server room anymore.
Cool.... (Score:1)
Of course it's a Cray (Score:1)
*RIGGED* (Score:1)
Why 300 MHz? (Score:1)
Can you imagine... (Score:5, Funny)
Re:Can you imagine... (Score:1)
SV1 is one huge machine, but there are others (Score:4, Informative)
However, it's not alone. There are some other pretty mighty machines out there. The NEC SX-5 [cray.com] has faster RAM and more powerful vector CPUs than the SV1, but does not scale as large. The SGI Origin 3000 [sgi.com] series is not vector, but rather of a (somewhat) traditional CPU design. It's available with up to 512 CPUs and 1 TB of RAM. Unlike both the SV1 and SX-5, the Origin can be ordered with graphics (which turns it into an Onyx).
Then there's the upcoming Cray SV2 [cray.com], which will be a combination of massively parallel & vector processing. Up to several thousand CPUs and a staggering RAM throughput of 250 GB/sec per bank!! (The Origin 3000 mentioned above has a total system bandwidth of 716 GB/sec... but that's the entire machine. The SV2 will have more than that with just three banks of RAM alone.)
Some of these machines are single image systems (in the case of the Origin 3000, SX-5 and >33 CPU SV1)... meaning they are one single machine, not a cluster. Most run very specific OSes made just for their hardware, with the possible exception of the Origin. SGI's big Origin and Onyx 3000 machines run IRIX 6.5, the same OS that runs on a $150 e-bay special SGI Indy workstation. Kinda cool. The compilers and math libraries are also heavily tuned and generally come with lots of example code and performance tips. When my university purchased a 96 CPU Origin 2000 a few years ago, SGI included a *box* of binders and CDs from some past performance computing seminars they had held. Our university still holds a support contract for the Origin, and thus we're still getting significant compiler and library updates.
Sort of belittles dual-bank PC2600 DDR-SDRAM (2x 2.6 Gigabyte/sec = 5.2 Gigabyte/sec) and Myrinet (1 Gigabit/sec = 125 Megabyte/sec interconnect), doesn't it?
Of course... a 16-node x86 cluster doesn't cost $500K - $50M either...
SV1 and its friend, Origin (Score:2)
Some nitpicks: 512 processors is the "off-the-shelf" limit for the Origin series, but I know of special installations with as many as 2048. And there are probably some differences in the Irix kernel for the workstations and for supercomputers. I don't know the specifics, but possibly the two configurations of Irix are "the same" in much the same sense that Linux and Hurd are.
Speaking of Linux, we will soon see Origin systems with Itanium chips in place of MIPS. (They may not be called "Origin", but most of the architecture will be the same.) Since it makes no sense to port the Irix kernel to the Itanium, these boxes will run Linux. Which is why SGI is open-sourcing XFS and other products associated with IRIX.
Re:SV1 and its friend, Origin (Score:2)
As for system size, the 512p limit is real. With only one exception so far (NASA Ames), the largest O3000 you can get is 512p. There's a special mode that you can run in where you sacrifice half the memory capability per node to get twice as many nodes and hence a 1024p system, which is what NASA has. There is a press release on that someplace at NASA Ames and SGI but I forget where. The "special" 2048 is actually a pseudo shared memory cluster, probably using an interconnect similar to (but a lot faster than) Myrinet or using something like HIPPI. This is actually what Blue Mountain is.
As for the Linux boxes, I worked with some prototype hardware based on the Origin 3000 series "chipset" with Itaniums. It was pretty cool stuff (I was working on porting the system partitioning software from Irix to Linux). We have also run an Origin 2000 version of Linux/MIPS on a 128p system.
Re:SV1 and its friend, Origin (Score:2)
Now that I think about it, my assertions about differences between IRIX on an Indy and IRIX on massively-parallel systems were pretty bogus. There used to be different versions of IRIX for different platforms, but nowadays SGI emphasizes "modular upgradeability". And IRIX is still basically a 32-bit system. There is a 64-bit IRIX initiative, but the deadline for that is usually given as 2038 [slashdot.org] ;)
Re:SV1 and its friend, Origin (Score:2)
As for Irix being 32-bit, that is 100% false. Irix for the Indy, R4000 Indigo, and O2 is 32-bit. Everything else has a full 64-bit kernel and can run 64-bit binaries. Heck, we couldn't even address all the memory on all the nodes in an Origin without 64 bits. Just because the time is stored in a 32-bit value does *not* mean it's a 32-bit OS.
Oh yeah? (Score:2, Funny)
Sorry, Cray. I'm not buying.
Re:Oh yeah? (Score:1)
Darn it, now I'm going to have to go try this out and see...
Beowulf? (Score:1)
Excuse me, but haven't they considered Beowulf clusters [beowulf.org]? I think they are better in both scalability and price. Some clusters have even managed to rank among the 100 fastest computers.
Re:Beowulf? (Score:1)
Re:Beowulf? (Score:3, Funny)
Real computers are designed in Chippewa Falls, Wisconsin. Real computers have high-speed interleaved main memory, and lots of it. Cache is for losers who can't afford a real memory system.
Re:Beowulf? (Score:2)
Re:Beowulf? (Score:1)
Trying to start my own branch of the Lienie Lodge here in Kansas City.
-Freed
Re:Beowulf? (Score:1)
Re:Beowulf? (Score:2, Interesting)
Re:Beowulf? (Score:2)
Re:Beowulf? (Score:2)
Furthermore, the larger configurations are a sort of super beowulf cluster!
No. (Score:2, Informative)
Re:No. (Score:2, Informative)
I will. Crays are vector supercomputers, which is something entirely different from your garden-variety Intel or RISC chip. There are several different types of computer you need to consider in the sort of comparison you're making:
- Vector supercomputers. This includes Cray, and some by Fujitsu and Hitachi (perhaps NEC as well, but I think those are MIPS-based).
- Massively parallel shared-memory supercomputers. The IBM SP2 and SGI Origin 2000/3000 come to mind. You take two of these, plug them into each other, and get one computer twice the size with (I think) virtually no loss of bandwidth. I'm pretty sure these can also be connected just for high-bandwidth communications, but the real advantage is in shared memory. Cray makes these too, and SGI's MPPs are largely based on Cray technology (hence the "CrayLink" on Origins).
- Distributed computers. Beowulf is just a set of patches (primarily to Linux) to make distributed-memory programming easier (e.g. utilizing multiple ethernet cards for higher bandwidth). You still have to write programs specially to take advantage of the machine.
The difference lies primarily in programming techniques. You cannot take a simple multithreaded program that would saturate an SP2 or Origin and run it on a Beowulf cluster. You'd have to rewrite it with PVM or something. PVM is not difficult, but it's not transparent. Some Fortran 90 compilers will do automatic parallelization, but not for a distributed-memory system.
Basically, there's a hell of a lot more difference between a Cray and a Beowulf cluster than between the Beowulf cluster and the SETI@home network.
-Nat
( disclaimer- I am not a supercomputer programmer, but a lot of the people I work with are. I do know something about parallel code, however. )
Re:No. (Score:1)
Re:No. (Score:1)
Re:No. (Score:1, Interesting)
If, however, the sub-tasks have to communicate with each other the bandwidth becomes critical and clustering over a network won't scale anymore.
Cray represents another approach to the problem. It has absolutely amazing bandwidth and can deal with the hard problems that can't be parallelized over a network.
So, clustering Crays wouldn't help you at all.
Re:Beowulf? (Score:2)
what does scalability mean ?
iirc, the MASPAR MPPs were 16384 Motorola 68k's.
Thats scalable - if you mean "lots of cpus".
or what about some of the ASCI computers ? 8192 cpus, 6144 cpus, etc etc. No beowulf that big, eh ?
What is it that you really mean by beowulf ? Or is it just "the buzzword" that everyone loves and this time (for the first time in 234092384234 slashdot articles) it happens to be slightly relevant ?
The idea of shared-nothing commodity clusters isn't new, and linux isn't the only place its done , much less beowulf. Infact, Cornell ditched some SP/2 boxes to build a cluster--but they used Win2k-- and apparently they love it. You can buy such a compute cluster from Dell just like theirs if you want it.
No, i dont think the issue here was "we've never heard of beowulf" or "well, we are against beowulf because we're snobs". Maybe, just maybe, they had criteria other than "must sound like 'eowulf' when they made a decision ?
Re:Beowulf? (Score:2)
Actually, in a lot of supercomputing fields, the decision is heavily based on "it must run the Cray Fortran compiler in optimal fashion". There are simply huge amounts of Fortran code, much of it written and optimized 20 years ago by brilliant graduate students who had taken maybe a single CS course, that would have to be rewritten when moving to any other platform.
Rewriting all this code for a different system would make the Y2K update of all "that Cobol code where the source listing had been obsoleted because they'd modified the binaries because compilation took too long" seem like a walk in the park.
(Cray may supply F90, but I'd bet Crays spend most of their time running amazingly optimized F55 code.)
I know nothing of such high end hardware, but.... (Score:1)
Re:I know nothing of such high end hardware, but.. (Score:3, Informative)
That's a lot of GFLOPS :-), and a LOT of RAM.
I'm not an expert in CPUs, but I've picked up a few things that may help you.
There are several ways of making a CPU fast. You can (the very popular way) increase the clock frequency, thus doing more operations per second. Roughly, one clock cycle equals one "CPU instruction" (sometimes instructions take more than one, depending on what kind they are). This is the popular way to make a CPU sellable; inexperienced PC buyers sometimes simply focus on "How many MHz does this hard drive have?" :-)
The second way is closely connected to this: simply execute more than one instruction per clock cycle. This is working in parallel, a more complicated solution that helps in some types of operations but not others. Some problems are not good candidates for parallelizing.
A CPU has something called a pipeline [some have more than one, i.e. parallel processing]; you can compare it to an assembly line in a modern factory. More pipes = parallel computing. For some reason, a short pipe [fewer steps until done] gives faster execution but lower clock frequencies, maybe because of heat or something. Could anyone fill me in here? Anyhow, a CPU like the G4 [Motorola/Apple] has a rather short pipe, 4 or 5 steps. The P4 [Intel] has a rather long one, 20 or so. This is why a G4 doesn't reach the same MHz as the P4 but can still compete in raw computing power.
You can also increase performance in a CPU by adding special instruction sets the programmer can call, and then optimizing those instruction sets. The Pentium++, for example, is a rather simple processor wrapped in a huge pile of add-on instruction sets, like MMX, SSE, SSE2 (and many, many more). The wrapper hardware translates these advanced CPU calls into the basic instructions the core CPU actually understands.
Hope I clarified some things; if I missed something or got something wrong, please correct me :-)
Re:I know nothing of such high end hardware, but.. (Score:3, Informative)
There are several ways to improve speed. The direction Intel went with their chips (and many other vendors as well) is pipelining. Pipelining is when you take that fixed number of transistors and break it into groups based on when they do their work. A 2-stage pipeline is one where the instruction logic is separated into two steps. A 3-stage pipeline is three steps, and so on. A sequence of four instructions in a 3-stage pipeline executes like this:
1) The instruction is loaded and the first stage is executed in one clock cycle
2) The next instruction is loaded and it is executed in the first stage while the the first instruction is executed in the second stage (one clock cycle)
3) The third instruction executes in the first stage, the second instruction executes in the second stage, and the first instruction executes in the third stage (one clock cycle)
4) The fourth instruction executes in the first stage, the third instruction executes in the second stage, and the second instruction executes in the third stage (one clock cycle)
5) The fourth instruction executes in the second stage and the third instruction executes in the third stage (one clock cycle)
6) The fourth instrction executes in the third stage (one clock cycle)
So, as you can see, once the pipeline is filled, one instruction completes every clock cycle, but each instruction takes three cycles to complete. Neat trick, eh? There are a lot of hairy details to take care of between stages, and pipelined processors can get very complicated very fast, particularly if you're trying to implement an instruction set that wasn't designed for a pipelined architecture (i.e. the x86 instruction set).
Cray went a different way. A Cray processor uses vector instructions to process a lot of data in one instruction. Compare this to the pipeline, where multiple instructions are in progress during any single clock cycle. A vector processor, on the other hand, has large sets of registers which are referenced as a vector, and has instructions that can fill an entire vector from a particular chunk of memory, add two vectors and store the result in a third, multiply, divide, negate, whatever, a vector at a time. And then of course there is an instruction to store the contents of a vector back into a particular chunk of memory.
Pipelining has the marketing advantage that if you make your pipeline long enough (the Pentium 4 is a 20-stage pipeline) then the stages take less time to execute and you can bump up the clock speed.
Vector architecture does not have this marketing advantage, but it is historically superior for certain applications and data sets (like modeling meteorological data).
Re:I know nothing of such high end hardware, but.. (Score:1)
Ye olde 8086 is much like the canonical 1 cycle = 1 instruction CPU that you described. Since the minimum number of transistors needed to execute an instruction is pretty much fixed (though occasionally somebody somewhere figures out a way to reduce the number by a few), and the amount of time it takes for signals to pass through a sequence of transistors is basically fixed (although better materials and smaller transistors can improve this), a 1 cycle = 1 instruction design just isn't capable of running at a high clock speed (MHz).
The 8086 is *far* from being a cycle-per-instruction design. Its fastest instructions take 3 cycles (like NOP or register-to-register ADD). Instructions with complex effective-address calculations take even longer. For example, MOV (MOV = the load/store instruction in the x86 'architecture') of an immediate (immediate = the data is supplied in the instruction) to memory with base + index + displacement addressing takes a massive *22* clock cycles. For comparison, in more modern architectures (anything since the 486), it often takes just 1 or 2 effective clock cycles under ideal conditions.
Re:I know nothing of such high end hardware, but.. (Score:2, Informative)
Each stage in the pipeline lets the hardware work on the instruction a bit - to set up register access and whatnot. Quite a few of the steps in modern x86 processors are 'unwrapping' the CISC instruction and turning it into RISC ops (this is a bit simplified). The more steps there are, the shorter (less time) each step can be, letting the clock rate go up. Fewer steps means (generally) that each step needs more time, therefore limiting clock speed.
Long pipelines have one drawback, though. Assume there's one instruction currently being executed. The next one, in memory, will be in the stage that's one back. The next instruction after that will be in the stage before THAT, and so on. This works most of the time, where you have many sequential steps in a row. However, if there's a branch, the pipeline has to be flushed; it'll take at least as many clock cycles as there are stages in the pipeline before any instructions start actually getting executed - there's a lag while the instructions make their way from the start to the end of the pipeline. There may also be overhead on top of that, which can make the stall time greater than if there were no pipeline at all.
So, back to yer original question: a high-MHz deep-pipelined chip can be slower than a lower-MHz shallow-pipelined chip IF there are a lot of branches in the program, because each branch will require a pipeline flush, which takes a lot of time to recover from. Speculative branching helps out a lot here, but it's not 100 percent accurate, and also requires a lot of silicon to deal with.
All the extra real estate on the chip dedicated to the logic for deep pipelines could instead be dedicated to speeding up operations or extra cache or whatever. But x86 chips need fargin' deep pipelines these days to get high MHz numbers, or else each complicated CISC instruction would take a year or so to decode.
Re:I know nothing of such high end hardware, but.. (Score:1)
Your reply implies the following towards the end, but it wasn't clear: pipelines aren't automatically flushed on every branch, as you first imply. The CPU has to decide which fork to take when it loads instructions after the branch is read into the pipeline. Only if the code takes the fork that's not already in the pipeline does the CPU discard the pipeline's contents.
CPU speed is not relevant anymore! (Score:1, Interesting)
This has been common knowledge in the world of supercomputing for decades. In a multiprocessor architecture the speed of an individual processor is not that important. What's important is that the processors can efficiently access the memory, mass storage and can rapidly communicate with the other processors.
If I were buying a new computer now I'd opt for a dual processor setup (possibly two 650 MHz P-III CPUs or something else in the same MHz range) over a single, blazingly fast CPU that chokes on the sluggish memory bus.
Re:CPU speed is not relevant anymore! (Score:1, Informative)
Re:I know nothing of such high end hardware, but.. (Score:2)
In an ordinary PC, you can use one CPU clocked really fast, but you're limited by the speed of the I/O bus and memory bus. This is where cache comes in, as small amounts of data and code can be held in extremely fast memory "close" to the CPU.
In a supercomputer like this, you use lots of slower processors, which aren't necessarily limited by bandwidth, but can individually get enough work done.
Imagine, if you will, 35 people in Edinburgh, who need to get to Glasgow, some 50 miles away.
Would it be quicker to transport them in a 160mph Porsche Boxster, one at a time, or take them in 5 Volvo estates?
Re:I know nothing of such high end hardware, but.. (Score:5, Informative)
i was told in a CS course that the arch of the cray vector units is basically the same as the cray 1... the speeds have changed, the process has changed, the external pieces have gotten much faster.. but at the core, the cray vector machines are very fast at the following type of thing:
given a vector of a given length
do foo to every element in that vector
_very_ efficiently
to see how this operates a bit better, consider how a normal cpu might do the following
for i = 1 to 64
begin
blah[i] = blah[i] + 1
end
that would end up getting compiled perhaps into something like this on a traditional cpu:
loop:
load blah[i]
increment blah[i]
save blah[i]
increment i
if i <= 64, goto loop
what we're seeing is that for 1 element, we do a load, an ALU op, a store, an ALU op, and a conditional branch.
conditional branches fuck cpus. badly. having load/stores inside inner loops fucks cpus badly.
to see why, you need to understand pipelining, but basically i'll make it short and easy: the instruction cache of a cpu is always stuffing the pipeline with its "guess" of what the next instructions should be... and it's not until several of those 1.4ghz clock cycles later that you even know if you've got the right instruction... if you do, great.. if you don't, you're fucked and you flush the pipeline and start over.
conditional branches fuck this all to hell because without optimization, you've got a 50% chance of filling your pipeline with the wrong instructions.. so on a p4 with a 20+ stage pipeline you're talking about throwing away some sizable portion of those instructions... and then refilling them... now, branch prediction really helps this a lot, but conditional branches are just one problem... the load/store units of cpus also typically introduce huge pipeline delays... i.e. you need to load blah[i] but that takes 2 or 3 cycles (even from cache!! don't even think about it if you need to go to main memory) so any instructions which use blah[i] must be scheduled at least 2-3 clock cycles afterwards...
so without keen optimization and ideal software loads, suddenly your 1.4ghz chip is stalling 2-3 instructions all the time.. and it's only running like a 400mhz proc
so, to make traditional cpus fast, pipelining and multiple EUs have been added. these have drawbacks (and i've listed some of pipelining's above).
the "vector" approach is totally different. you actually have "vector" registers, and "vector instructions". the machine actually sets up "virtual" pipelines for you. so on a vector machine, the scenario above would be more like:
vectorsize=64
xv = xv + 1
(assuming xv is the vector register with your 64 elements in it)
what the cray hardware does is hook up the pieces of its cpu in a virtual pipeline that does something like this:
foreach element of xv
load
inc
save
notice that the foreach construct looks like a loop, but it's not really; it's pipelined, so what actually gets sent through looks like this
load i
inc i, load i+1
save i, inc i+1, load i+2
save i+1, inc i+2, load i+3
save i+2, inc i+3, load i+4
save i+3, inc i+4, load i+5
etc etc etc
except for fill and drain, the load, inc, and save hardware units are always perfectly utilized. there is no branching or conditional logic involved.
the example i've chosen is very trivial, and may be subject to huge factual or conceptual mistakes
there are lots of interesting problems that the cray did _not_ handle well.. but for what it's worth, the vector processors in the cray 1 aren't significantly different in operation and instruction set from the SV1 of today.. by many measures, cray "got it right" originally. the SV1 of today might use normal BGA packaging on a CMOS-based process (the cray 1 used discrete ECL logic and point-to-point wiring - all strung together by little old Minnesotan women)
also the original cray 1 ran at either 100 or 80mhz and could take 32mb of ram.... i.e. a 1970s machine that was faster than any desktop workstation until the mid 90s...
note that the top500-list crays are usually T3Es.. which are a totally different beast from the vector machines.. a T3E is just a bunch of alpha CPUs on a very fast interconnect.. sort of like a "custom cluster in a box".
Re:I know nothing of such high end hardware, but.. (Score:1)
Vector units are extraordinarily fast at certain tasks. I work with a custom DSP that uses a vector processor to do FIR filtering, and the amount of processing it does is mind blowing. We clock it at somewhere between 80-120 MHz (depending on application), and at the top end of that range it gets nearly a billion ops per second.
Now, this does come with some drawbacks. First of all, it requires a tremendous amount of silicon to do properly, making development extremely expensive. Not to mention that with all that logic running simultaneously, power consumption can become an issue as well. Secondly, it is a royal pain in the ass to program (or write a compiler for). When you have 8 operations per instruction word, making efficient use of that processing power involves writing some ugly, ugly code.
Tim
Re:I know nothing of such high end hardware, but.. (Score:1)
Re:I know nothing of such high end hardware, but.. (Score:1)
Re:I know nothing of such high end hardware, but.. (Score:1)
300 Megahertz... (Score:1)
Re:300 Megahertz... (Score:1)
8MB are good (Score:1)
But I find it frustrating to see these overclocked circuits unleashed just for science. They might make a decent Quake server, though.
Re:8MB are good (Score:2)
was it 8 MB or 8 Mword? I seem to recall Crays using some non-standard word size.
While I'm at it, here's another:
How fast were those 160 MFLOPS? I suspect that sustained throughput would play a big part in it. Is that about as fast - in real-world speed, not peak tight-loop speed - as today's desktops, or have we finally caught up to that?
Re:8MB are good (Score:1)
So, an 8-megaword Cray is equivalent to a 64-megabyte PC (memory-wise, that is), except it really has 80 megs.
Gratuitous MS Bash... (Score:2, Funny)
I have to wait almost all day for it.
Sara's TERAS: 1024 cpu SGI Origin 3800... (Score:1)
(And nope, it's not listed in the Top500 yet.)
For more closeup pictures see: http://unfix.org/news/sara/ [unfix.org]
Ain't it sweeeeeeeeeeet?
500 Fastest Computers In The World (Score:5, Interesting)
Visit here [top500.org] to view the 500 fastest computers in the world as of June 2001. Cray is actually number 11. The IBM ASCI White SP Power3 is the king.
It's interesting to note that a Beowulf cluster is also there (#42).
Number 6 was built in 1998! (Score:2)
A lot has happened since then (just think: in 1998 the fastest x86 CPU was the Pentium II at 450 MHz). If you look further down the list, the next-oldest machine is a Cray at number 35. Very cool that Blue Mountain is still a pretty impressive performer over three years later (an eternity in computer terms).
Re:500 Fastest Computers In The World (Score:1)
Re:500 Fastest Computers In The World (Score:1)
Re:500 Fastest Computers In The World (Score:1)
Customer: I want the big red one on page 42
Cray Salesperson: Cool choice! We'll start delivering it next week at noon...
Other machines may be faster, but they're as rare as hen's teeth.
Re:500 Fastest Computers In The World (Score:2)
A related site, which I find a bit more interesting, is the clusters database [top500.org]. Particularly noteworthy are three PC clusters that cross the teraflops line (peak performance, mind you, but still impressive).
Re:500 Fastest Computers In The World (Score:2, Insightful)
Re:500 Fastest Computers In The World (Score:2)
Re:500 Fastest Computers In The World (Score:1)
Simulating nuclear weapons also falls under "energy research". And it also most certainly takes that kind of computing power. Just thought you should know.
Re:500 Fastest Computers In The World (Score:1)
Well... I figured they weren't trying to solve CA's energy crisis.
Re:500 Fastest Computers In The World (Score:1)
-Chris
Re:500 Fastest Computers In The World Re: Cray T3E (Score:1)
Criteria used: (Score:3, Funny)
A clustered J90 ... (Score:1)
Anyway, now that Cray has been purchased by Tera (the guys who developed that highly threaded CPU) it will be interesting to see their technical direction. In terms of processor development, theirs is the only vaguely interesting CPU that has reached the semi-commercialisation stage.
LL
Re:A clustered J90 ... (Score:2)
Now keep in mind that the J90/SV1 is Cray's "budget" line... The SV2 (due out next year) is supposed to be a successor to both the T90 AND the T3E (it's both vector and massively parallel).
I'm curious to see what happens with the Tera multithreading systems as well. The first few years I imagine they will just be bought as computing research machines. (so that people can see what they do)
Re:MHz speed comparisons are not fair (Score:1)
x86 vs R10K/R12K in the Real World (Score:1)
We run Octanes with dual R10K 300 MHz CPUs and 1 GB RAM running IRIX 6.5.5M. These boxes cost us $35-40K EACH.
Last year, we began testing Dell Precision 420MT workstations with dual 866-933 MHz PIII CPUs and 1 GB RAM, running Red Hat 6.1 out of the box with no kernel optimizations and older versions of gcc and glibc. These boxes cost about $7000 each.
For our purposes (voice recognition models), the Dell systems outperform the Octanes by 25% at 1/5 the cost.
The other thing that kills us is the $70K annual support contract to SGI.
Guess what? We're selling the Octanes and going to install even faster rack mounted x86 compute servers that cost even less than the Dell workstations.
BTW, does anyone have information on how many MFLOPS current x86 hardware is capable of?
Re:x86 vs R10K/R12K in the Real World (Score:1)
SGIs are good for very large, graphics-intensive "simulations" (e.g. modelling) because of high internal bandwidth. And those dual MIPS processors work together much better than your Pentium IIIs do. But for the tasks we do, x86 wastes pretty much anything.
It's a shame, because PC hardware and OSes are such shit. Even Linux -- a far superior OS for our purposes than anything else -- pales in comparison to IRIX. I suppose since the entire world is moving towards Intel chips, it's a good thing we've got Linux. If I ever get told to do Windows development I'll probably quit and go to med school or something.
-Nat
Re:MHz speed comparisons are not fair (Score:1)
Sigh...it was a joke.
I probably shouldn't have bothered.
Tim
PS. Those <sarcasm> tags are looking pretty good about now... :-)
Re:MHz speed comparisons are not fair (Score:1)
Re:MHz speed comparisons are not fair (Score:2)
Assuming you mean distributed.net, you are incorrect.
MIPS processors do not implement in hardware the bit-rotate instruction that x86 does, and RC5 cracking relies on it heavily. We benchmarked a 4-processor Origin 2000 with MIPS chips running at 300 MHz, and it came out around a Celeron in key rate, even using all 4 processors.
So, while your point is correct, using distributed.net as an example with MIPS processors is not a good idea.
Re:MHz speed comparisons are not fair (Score:2)
No, but... (Score:1)
Re:Linux cost comparison (Score:1, Insightful)
Re:Linux cost comparison (Score:1, Flamebait)
We all know it's true. But you just couldn't beat it into some people's skulls even if it was affixed to the end of your cluebat with a wad of gum (which, incidentally, appears to be what is holding Linux together).
Hey moderator: -2, I dare you!
Re:Linux cost comparison (Score:1)
EXT2 doesn't lose data like a firehose sprays water. It's good enough for a lot of people. And most people avoid at all costs the sort of total power failure that will be bad for EXT2.
The reason why Linux is getting support from so many people and companies is, contrary to Mr. Mundie's suggestion, because of the GPL.
Look at FreeBSD or the other "free" BSDs. Why don't IBM and others support this great OS? Because it's not under the GPL. This is easy to understand, actually, if you take a step back and think.
Take, for example, JFS for Linux, or XFS; both are open-source projects. JFS is actually under the GPL. Why would IBM do that? Well, really, for IBM and most other companies, they just don't make enough from selling their own UNIX. Open source makes it possible for IBM and SGI and others to take advantage of the resources of their competitors. Weird, eh? IBM works on the Linux kernel and so does SGI, and both get something out of it, but neither has to pay the full cost of developing the kernel.
But the real strength, as so many have pointed out, is that the GPL specifically makes it practical for companies like IBM to share their technology. They can have no fear that people will use their technology against them by offering it as their own. Sure, they can use it in their own OS and charge for the OS, but the OS will have to be open source. Or else they will still have to pay for developers.
What is also very good for computer firms is that the development of most Linux projects is a very open process. With the likes of FreeBSD you have to work your way into the organization before you can contribute to the core code. Sure, you can spin off your own. But again, you don't get the community support, and you have to foot the whole bill yourself.
At the bottom of it all, marketing is very important. Linux markets well. It has media attention. It attracts the university students in CS programs who will come out to work in the real world some day. It supports flashy multimedia... in other words, Linux is flashier. It's like the geeky guy who is nice and friendly, very helpful but quiet, who works hard but can't get a date, contrasted with the flashy guy who's not so nice, loud, and parties a lot but gets all the girls. It's the world, what can I say?
Re:Linux cost comparison (Score:1)
So don't go complaining. Go ahead and use OS X, or, if you are really the clueless asshole that you are, use 9.1 till the end of days and suffer the problems of an outdated OS.