Hardware

Project Appleseed Updated

J. FoxGlov writes "UCLA's Project Appleseed has been updated with new benchmarks showing their clusters of Macintosh G3s and G4s running neck and neck with Crays and kicking the snot out of Pentium II clusters, generating fractal clusters in parallel. Includes the recipe for making your own Apple-flavored Beowulf cluster. "
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward
    Silicon Fruit [siliconfruit.com] is one company with PPC --that's dual PPC-- motherboards in the pipeline. Don't ask me how far down the pipeline they are. Could be Wednesday, might be never. Their website has a remarkably familiar look, can't rightly say where I've seen it 'fore, though.
    That Rio Red is a juicy lookin gal telluwhat.
  • by Anonymous Coward
    Just imagine if they made a Beowulf cluster out of THOSE! er, uh...

    What was the article about?
  • by Anonymous Coward
    That rocks! When I was just a wee little tyke, The den mother we had was a real bitch. There was many a time I wanted to see her launched.

    I wonder if I could port one of these to Mindstorms and, using my Lego, harness the new-found power to create my own Mac-powered Den Mother Launcher!

  • Forgot one last thing...

    Cray does sell the T3E-1200E of course (http://www.sgi.com/t3e/tech_info.html), with a 122 Gb/sec max bisection bandwidth and up to 2.4 TFLOPS peak CPU performance. And up to 2048 (when liquid-cooled) processors.
  • After all, the Appleseed Plan isn't supposed to be announced until 2030..oh, wait...:)
  • I'd like to emphasize the memory effect. Cache performance in SMP architectures can have a huge impact on performance. For larger processor counts in shared memory machines, things like main memory latency, size and location are similarly important. The interconnect has to handle these memory requests too, as well as coordination messages. It probably isn't worth parallelizing on an SMP machine if you can't get good cache behavior.

    Clusters are shared-nothing architectures, great when you don't want to share anything. The SGI, Cray (I know), IBM, et al. supercomputers are meant for problems that are hard to parallelize. They cost a lot because sharing resources is _hard_ to do well.

    We aren't getting something for free with Beowulf and other clusters. What we're doing is more niche marketing, just like John Katz with his book announcements on /.. Clusters are helping people with a particular kind of parallel problem, and they arrived because appropriate hardware and software has been commoditized. In particular, high speed networking and cheap `nodes' (in this case, whole PCs), as well as Free (in both senses) software.

    -Paul Komarek
  • except that because of the economics of volume production, at least a few weeks ago, every 450 MHz PII I found was about $20 or so more than the 450-PIII at the same shop... if the PII's had been significantly cheaper, I would have gotten that (or a celery...)

    Where are you shopping? I just checked PriceWatch, and PII/450s are around $110-$130 and PIII/450s start at $230.

  • I wonder why they would do this with any Mac OS... and AppleTalk... wouldn't the scalability be lost with the message passing... and Apple's TCP/IP stack isn't the best, either... so I think...

    You think that they would have better performance with a better setup... why cripple it from the beginning with the Mac OS?
  • "Step 2: Configuration

    To set up the Macintosh for parallel processing in MacOS 8.1 and higher, one must set the AppleTalk Control Panel to use the appropriate Fast Ethernet Adapter and verify in the chooser that AppleTalk is active. Next, a unique computer name must be set and Program Linking should be enabled in the File Sharing Control Panel. Finally, in the Users and Groups Control Panel, one must allow Guests to link. (Recommended:
    In the Energy Saver Control Panel, set the sleep time to Never (although it is okay to let the monitor go to sleep). This prevents the MacOS from going to sleep while running a Fortran or C program.)"

    I did read the article. I may not be the most versed in Mac OS and AppleTalk, but from what I understand, AppleTalk is good for peer-to-peer with only a few peers; after that it can get icky. We use AppleTalk where I work to talk to some of our older Macs. The SGI kashare program on our server is a killer. It might be SGI's implementation, or it could be that AppleTalk was not made for large networks. That is why I don't think that this would be a strongly scalable system. The power of parallel is lost with 8 or 16 boxes. The only time that a strong parallel system can be built is when the number of nodes is over 100 or so. Some of this, though, could depend on the application as well.
    I know that in my experience with rendering, having 8 boxes to render on is almost worthless, having 40 is nice. Now adding 8 more boxes... that is even nicer. The power comes from the sheer numbers.
    Also, if the message passing is poor, the power is lost. From articles I've read, using multiple boxes on a single render loses the advantage somewhere near 6 to 8 procs. With 8 processors, you are losing 2 procs' worth of power to message passing. I have seen clever ways of avoiding this issue with special hardware. The other solution is to take a pre-parallel step and break the task up before it gets to the cluster. With rendering this is easy; with other applications, it isn't as easy. This is why quick message passing is important and why I was wondering about the choice of OS. Message passing is indeed an important backbone to any parallel system.
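    The falloff described above can be sketched with a toy Amdahl-style model. The serial fraction and per-processor message cost below are made-up illustrative numbers, not measurements from the article or the renderer in question:

```python
# Toy model: parallel speedup with a fixed per-processor
# communication cost, showing why adding boxes eventually
# stops helping. All constants are illustrative assumptions.

def speedup(n, serial_frac=0.05, comm_cost=0.04):
    """Speedup on n processors: Amdahl's law plus a linear
    message-passing overhead term that grows with n."""
    t = serial_frac + (1 - serial_frac) / n + comm_cost * (n - 1)
    return 1.0 / t

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:3d} procs -> speedup {speedup(n):5.2f}")
```

    With these made-up constants the curve peaks around 5 processors and declines after that, which is roughly the 6-to-8-proc crossover the post describes.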

  • by rbf ( 2305 )
    Macs? Wouldn't that integrated monitor take up a lot of space? Boy... Think of the deskspace that would take up! No thanks!! I'll stick with a nice rackmount Alpha!!! :-)

    LONG LIVE ALPHA LINUX [alphalinux.org]
  • I asked a question; what part of that is being a troll?
  • Plenty clear, but can you get them in a nice sleek rackmount? Can you get a Mac without the MacOS preinstalled? Why not feed the trolls? Are you afraid of paying the toll to cross the bridge?
  • So? Why would anyone want a cluster where EVERY system has a monitor? It just takes up too much space!
  • Just about anything can be crammed into a rackmount case, but can you BUY it that way? Otherwise you end up spending too much time and money; buy the box, buy a new case, move components from old case to new... You get the idea?!?
  • ---
    hehe, "checked out their stock lately?". Like that has anything to do with anything.
    ---

    Sure it does. The clearly inflammatory statement was "It sounds like Apple is not quite giving up yet, although it probably should be", which seems to mean that they should just give up. Their stock price would indicate that they're not even close to going out of business - no need to give up.

    Any time a vaguely Mac-related story is posted, you get these clowns making stupid comments and stereotypes about people they don't even know. If you have a legit gripe with Apple and/or the Mac:

    1. Make sure it's on topic.
    2. Make sure it's informed.

    Usually you find little of either.

    BTW: Your jokes are really funny. Hah hah.

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • Cray is now a part of sgi. Last I heard, sgi was planning on selling that division off because it hasn't been very profitable. Not to sound like a pessimist, but if sgi continues to have its financial woes and nobody jumps up to buy Cray, Cray might just get axed. It would be a shame to see them fade away, but in this day and age Cray's niche seems to have vanished.
  • CmdtTaco == Commandant Taco?

    No, "Cut Mah Durn Threote Taco", 'cause you can't be called Dibbler in the Confederacy.

  • Just about anything can be crammed into a rackmount case, but can you BUY it that way? Otherwise you end up spending too much time and money; buy the box, buy a new case, move components from old case to new... You get the idea?!?

    Back-of-the-envelope cost analysis:

    Doing it yourself:
    • order(10) boards at order($1000) each: order($10,000).
    • order(10)-board rackmount enclosure at order($1000): order($1000).
    • Value of assembly time: order(10) hours at order($100) per hour: order($1000).


    Total cost is dominated by the cost of the boards you're putting into the rack. Both the cost of the rackmount enclosure *and* the cost of fiddling with the boards to put them into the rack are irrelevant compared to that, even at a very high dollar-per-hour cost for effort.

    Disclaimer: This is a Fermi estimate, not a detailed cost analysis.
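    For what it's worth, the same Fermi estimate works out in a few lines (using only the order-of-magnitude figures assumed above):

```python
# Back-of-the-envelope rack-your-own cluster cost, using the
# order-of-magnitude figures from the estimate above.

boards = 10 * 1000     # ~10 boards at ~$1000 each
enclosure = 1000       # one ~10-board rackmount enclosure
labor = 10 * 100       # ~10 hours of assembly at ~$100/hr

total = boards + enclosure + labor
print(f"boards ${boards}, overhead ${enclosure + labor}, total ${total}")
```

    The rack-and-labor overhead comes to about 20% of the board cost, so the boards dominate, as the estimate concludes.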
  • It's funny you mention that, because back in the '80s there was a point where Steve Jobs proudly proclaimed that Apple Computer Inc. had purchased a Cray to help design the next Apple computer, to which the CEO of Cray responded that that was funny, because Cray had purchased an Apple to help design the next Cray. A bit of historical geek trivia for you all.
  • Easy. They wanted to make things easy to set up and use.

    Sure, MacOS might slow things down by a few percentage points, but the cluster takes nearly zero effort to set up, and nearly zero effort to get started on a problem. I realize that a good portion of the readership here pooh-poohs the notion, but some people appreciate being able to hit a couple of buttons and have it all work.
  • Yes, they did use AppleTalk. You're confusing AppleTalk with LocalTalk. LocalTalk is a wiring and very base-level communications protocol, comparable (though much slower than) ethernet. AppleTalk is a high-level transport protocol that can run over many types of connections, such as LocalTalk, Ethernet, PPP, etc. It is comparable to TCP/IP.
  • OR you can have Apple custom-build you a tricked-out G4 with 150G of RAM (no, I'm not kidding)

    Actually, yes you are.

    You're missing a decimal point. You can have Apple custom-build you a tricked-out G4 with 1.5G of RAM (note decimal point). Or you can have Apple build you an otherwise-tricked-out system with 128M of RAM and upgrade it yourself at half the price.

    I love Macs, but blatantly incorrect advocacy doesn't do the platform any good at all.
  • I would say that describes the difference to a tee. Distributed computing would be like writing a book by committee through the US Mail. Everyone on the committee gets assigned a portion of the book, they work on it, then they mail the results back to get combined and get their next assignments. Parallel computing is like having all the authors in the same room where they can yell back and forth at each other.

    Of course, in this system, the first method is likely to be more effective. Book-writing is a case of something that doesn't parallelize effectively at all, and adding nodes can actually decrease performance! :)
  • The machines being used in AppleSeed are all beige G3 towers with the 66 MHz motherboards. Not the G4's, nor even the blue G3's. Also, G4's are $1600 and up, not $2000. The older G3's are substantially cheaper.
  • D'artagnan:

    http://www.students.yorku.ca/~kipper/dartagnan/dart.html [yorku.ca]

    Please note that they took a LEISURELY three days to set the whole thing up. Not three 20-hour, CmdrTaco-esque, caffeine-powered, loss-of-sleep days -- Three relaxed days in which they DAWDLED over the process. The utter ANTITHESIS of the Linux user-experience!

    The point here isn't about the raw performance of the cluster, although it's fairly respectable performance at that. The point is that anyone can set one of these puppies up, and administering one is a no-brainer. Plug 'n' Play folks! Three steps on a half-sheet of paper versus a 230 page "introduction". Apple's got ease-of-use DOWN! Whole point of the exercise.

    <disclaimer>
    A close reading of the Appleseed G4 benchmarks reveals that the AltiVec processor spends a good deal of its time just idling away, waiting for other processes to finish. The code is sub-optimal in this respect (it could be faster), but some of this is also due to the nature of the AltiVec instruction set. With some tinkering, it could be improved upon. The code's in FORTRAN fer cryin' out loud! It could stand to have about 10% (the most-used routines) hand-optimized in machine code.
    </disclaimer>

    <flame>
    I'm preparing to target my home-built ICBM on the next lamer who complains that Macs are more expensive than PC's. Generally these comparisons involve "generic no-name" Intel or AMD boxen. It's unfair, and you know it. Compare instead comparable machines from Dell or Compaq. This isn't about price either! You can spend as little as $800 for a bottom-of-line iMac (from a mail-order house), OR you can have Apple custom-build you a tricked-out G4 with 150G of RAM (no, I'm not kidding) and spend $15,000 (or more). As far as I can see, Apple has all the price ranges covered.

    You want to be a Linux-advocate, great. So do I. However, FUD is not advocacy.
    </flame>

    I wish they had the option of leaving off the graphics card, though. In the context of an Appleseed cluster it's a waste.

    I want "flavored" G4's too. That slate grey is boring.

    --B
  • OR you can have Apple custom-build you a tricked-out G4 with 150G of RAM (no, I'm not kidding)

    Actually, yes you are.


    typos happen.
  • While it's true that a saturated ethernet will slow down and become less efficient, this isn't a problem for most beowulf clusters. Most smaller clusters that use ethernet have it all set up with a switch. The switch has an aggregate bandwidth much greater than that of normal fast ethernet, and uses store-and-forward, backpressure, and flow control to keep the network working. The result is that you don't see the collisions that make un-switched ethernet slow down.
    For instance, this is from an interface on our beowulf cluster, which has a switch.

    RX packets:12889863 errors:0 dropped:0 overruns:0 frame:0
    TX packets:13388470 errors:2 dropped:0 overruns:2 carrier:0
    collisions:0 txqueuelen:100
    Notice how after 13 million packets, there are no collisions. Next, here is an unswitched interface on a firewall machine.

    RX packets:28635382 errors:18519 dropped:0 overruns:18354 frame:18519
    TX packets:20344300 errors:0 dropped:0 overruns:0 carrier:61
    collisions:1364754 txqueuelen:100
    As you can see, quite a few collisions.

    From what I've read, their code doesn't need a lot of interprocess communication, so they can get by with just one or two ethernet channels.

  • Sorry to burst your bubble, but getting a whole bunch of old 486s together isn't going to instantly give you stellar SETI@Home or Distributed.net scores... :)

    erm.. isn't Distributed.net already, well, distributed? ;-)

    Same goes for SETI.. Tho' if you run more than 1 client you end up getting the same blocks unless you open another account, and make yourself a team or something.. sigh.. If it weren't for that, SETI @home would be on every computer in sight ;-)
    --

  • AC says: "the diffrence between piii's and pii's at the same speed is almost zero"

    except that because of the economics of volume production, at least a few weeks ago, every 450 MHz PII I found was about $20 or so more than the 450-PIII at the same shop... if the PII's had been significantly cheaper, I would have gotten that (or a celery...)

  • This was a few weeks (or maybe months...) ago, checking a bunch of the local computer stores (chains, a few of the smaller shops in town) for prices on the boxed PII and PIII processors...

    I think I remember the PIII prices being ~$270 and the PII's being ~$290 at the time... Actually, I'm sure it was a few months ago. Anyways, there wasn't a significant difference in price over the week or two I was looking (I think the online prices were about $20-$30 less for the PIII and maybe $10 less for the PII), so I decided to go ahead w/ the PIII.

    I figured, "What do I have to lose? $20 and the nonexistence of an extra 'I'?"

    BTW: I generally use CNet's shopper.com [shopper.com], so I'm not sure if its price searches are generally better or worse than PriceWatch [pricewatch.com]...

  • Umm... About those 450 MHz PIII's - I can assure you they exist: I typed this comment on a dual-processor box with two of them.

    Now, the question still stands: why didn't they do any tests with a 450 PIII machine? I would be interested in seeing a comparison of their results with PIII's at the same clock speed...

  • MFLOPS? Don't Crays go into the gigaflop range?

    megaflop: 10^6 floating-point operations per sec
    gigaflop: 10^9 floating-point operations per sec
    teraflop: 10^12 floating-point operations per sec
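    To put those prefixes in perspective, a quick sketch (the 10^12-operation workload is just an example, not a figure from the thread):

```python
# How long a fixed workload of 10^12 floating-point operations
# takes at each rate defined above.
ops = 10**12
for name, rate in (("megaflop", 10**6),
                   ("gigaflop", 10**9),
                   ("teraflop", 10**12)):
    print(f"{name} machine: {ops / rate:.0f} seconds")
```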

  • Get the quote right in your sig. It's Gandhi:

    Q: What do you think of British Civilization?
    A: I think it would be a good idea.
  • Well, being optimized for the cluster, as far as I can tell, means splitting up the program into at least as many processes as there are processors, and at the same time minimizing the amount of IPC.

    Distributed.net does this already.
  • It doesn't make sense to make a beowulf-aware distributed.net client. Why not just run distributed.net on each of the machines?

    Alternatively, if you already have your beowulf set up and want to use it for distributed.net, then just set the number of threads to the total number of processors in the cluster.
  • Why did the post saying the exact thing above get moderated down, yet this one gets moderated up?
  • Also, G4's are $1600 and up, not $2000. The older G3's are substantially cheaper.

    A $1600 G4 is the 350 MHz variant. Why aren't they comparing those? Maybe because 350 MHz is TWO-YEAR-OLD TECHNOLOGY in the x86 world???? 450 MHz G4s are more like $5000. And your "substantially cheaper" G3s aren't even available commercially any more, except on the used market. So what's the point?

    Next time think before you cry out, f*cking Mac Zealot!!!!!!!
  • I'm sure you misspelled. Wouldn't that be THE WORLD'S UGLIEST BE0W0LF CLUSTER???????
  • I was wondering the same thing. I found
    the following site:

    http://www.blacklablinux.com/

    It definitely looks as though they will sell you
    a cluster, but I was not able to find anything
    on research.

    I am betting that the cost/performance ratio for
    PPC based machines is not low enough for groups to
    really take a look just yet.

    It would be nifty to see the appleseed project try
    it out.

    -nacks
  • One other thing that I would add is that all of
    the benchmarks that they used only used up to
    8 processors... not exactly a fair comparison.
    The whole point of a T3E is to be scalable. I
    would like to see them try to scale a beowulf
    cluster up to 1024+ processors (it is not going
    to happen without some VERY specialized networking
    which would sort of defeat the purpose).

    for a better comparison, one of the new SV1's
    (like Cris said) or even an older J90 series would
    be better.

    -nacks
  • Actually, I have spoken to Cray sales people and
    they would sell T3E's even larger than 2048 PE's
    but no one (that will own up to it...nsa may have
    one) has asked for them.

    The T3E architecture scaling limits have not yet
    been met (spoke with a Cray tech that mentioned
    a 4096 PE order once).

    -nacks
  • G4 trumps PII, Cool. Now for the offtopic part.

    Actually you are 11.79042138 times cooler than the
    poster you seem to have a problem with, and only
    11.31390384 times cooler than myself (user ID 60948)
    That clarified, I am surprised at your reaction to the initial post (cid #1), which didn't seem very inflammatory to me, just a little misinformed.
    We really shouldn't take comments
    about our chosen platforms personally.
    And I am at a loss to understand why a poster's user ID # would have any bearing on a discussion. If I am
    missing something please illuminate me.

    Thanks

    Kent
  • You could set a better example if you ignored the flames. You seem pretty knowledgeable, but also pretty hostile... the posts you choose to rip into aren't going to influence anyone anyway.

    Maybe /. isn't your cup of tea--not that Mac oriented, and what little there is seems to piss you off.

    In fact, Apple is pretty much antithetical to the /. Linux/OS-oriented ethos: closed hardware (not even PPC anymore). Closed OS. Closed software. Charging a royalty for FireWire. Those annoying "upgrade to QuickTime Pro" nags. Still, the hardware makes pretty good Linux boxes.

    Apple execs must thank god every night for Adobe, otherwise, what would be the point?

    The most impressive thing about the cluster, and about the Apple G3s used, is their ability to do parallel processing with comparatively little setup. The developers had to write some custom code, but not on the scale of Beowulf. That was pretty neat. The benchmarks were utterly bogus--they should have thrown in an IBM-360, maybe the ENIAC. 8 processor Crays, sheesh.
  • <next morning>
    Sorry. My first post was trying to be funny. My "cup-o-tea" response was a flame. You didn't deserve it.

    Shouldn't have opened the 2nd bottle of tequila...

    Without the Mac, there would be no Windows.

    Without windows, PCs would still be few and expensive.

    A broad installed base of Windows PCs (linked by the internet) makes Linux possible.

    How else do you explain RedHat's stock price?
  • I wonder why they would do this with any Mac OS... and AppleTalk... wouldn't the scalability be lost with the message passing... and Apple's TCP/IP stack isn't the best, either... so I think...

    They used the MacOS because the machines could still be useful to the staff and students when they were not being used by the cluster.

    They didn't use Appletalk. Macs come from the factory with Ethernet. Later Macs have 10/100baseT standard, with Gig available. Is that fast enough for you?

    You think that they would have better performance with a better setup... why cripple it from the beginning with the Mac OS?

    Well then, go right ahead and do that.

    You might try reading the Appleseed article first, though.


    --

  • They're running off a RevA iMac (233MHz).
    --
  • Apple has historically aimed its products mostly at people working with digital imaging, publishing, etc. These people are generally the creative sort and don't want to be troubled with the details of using the computer, but rather want to use it for productive work. BTW, moving those 100+ MB images around does require considerable computing power, which you can harness from the latest CPUs used in Apple's computers. So why not try and put them into a cluster and see what kind of power you can really unleash from underneath that user-friendly interface when all the beauty is taken away.
  • > Same goes for SETI.. Tho' if you run more than
    > 1 client you end up getting the same blocks
    > unless

    No, you don't. It *may* happen that you get the same blocks twice for some reasons, but generally you get different blocks.

    I'm running two clients on two different computers with one account for ~5 months now.
  • CmdtTaco == Commandant Taco?

    Pablo Nevares, "the freshmaker".
  • Didn't Cray go out of business?
  • by Crixus ( 97721 )
    What more is there so say other than:

    COOL.

    Any supercomputer is OK by me.

  • The UCLA version is slashdotted already.
    Here's a cache. [google.com]
  • Well, can you actually tell me the difference between parallel and distributed computing? I mean, there IS a difference, right?

    I think, the way I see it, distributed computing (in the sense of distributed.net and seti@home) means the problem is easily broken up into repeating but unique parts. The kinds of problems being solved using this architecture are not serial (wait, then it's parallel... isn't it?)

    so what IS the difference? I'm pleading ignorance.

  • So... can we say that they are the same, except parallel computing is generally used to categorize a system where the processors are connected at high speed in real time, and consequently the processors are more homogeneous and the environment more controlled, whereas distributed computing can be more lax on all of these requirements?
  • Right, I understand that now, because the overhead for interprocessor communication would defeat the purpose. But still, if you want to use beowulf, doesn't the application need to be optimized for the cluster? Would the distributed.net client take advantage of the beowulf cluster without modification, just by setting the threads?
  • how come they don't make parallel computer versions of the distributed computing programs? Or do they? You know, like if they created a version of say, seti@home or distributed.net to run on a Beowulf cluster?

    Come to think of it, if they can keep the data transmission routines closed-source, with some validation routine to make sure that the results generated by the calculation portion of the program are valid, then they could open-source the calculation part to be ported to parallel or clustered systems. I personally think it's better than running some fractal demo program...

    Still, isn't a G3 or G4 Macintosh Beowulf cluster somewhat more expensive than an equivalently powered Intel-based Beowulf cluster?

  • That was the "other" Cray. I believe it was called "Cray Supercomputer". The Cray most people know is Cray Research, which started in Chippewa Falls, Wisconsin and expanded to Eagan, MN (where I work for its remnants). The "other" Cray was started when Seymour Cray left Cray Research. Rumor has it that Cray Supercomputer (I don't think that's the name anymore) has recently shipped a prototype. Cray Research is (assuming SGI gets it done) going to become a separate company again and has a *very* cool product coming down, which will blow this cluster off the map :) (Since I'm on the SGI side of the split I am obligated to say that we also have a product coming down that is very cool and will take on this cluster very nicely. And not only that, a later version of it will run Linux - you could really make a nice Beow....oh nevermind :)
  • Guess Crays aren't quite what they used to be. Maybe they should make them in grape colors :)

    Actually Crays are (or at least were) available in your choice of colors. When I went on a tour of NCAR in Boulder the guides made a joke out of the fact that they were. Funny that choice of color is now considered to be a big thing.

  • Ooh, and lets call them iCrays and make retro commercials about them.

    While we're at it, why don't we designate a corner of every CompUSA to the iCray. :)

    Sorry to all the Apple folks out there, no offense intended. I just thought this was funny.

    kwsNI

  • And you can buy a K7 and overclock it to 1Ghz... what's your point?
  • There is absolutely no need to get personal here, but it seems like Apple is a company which has historically aimed its products at customers who like user-friendly interfaces, windows, buttons, mice, etc.

    True, true. That's almost everyone, though.

    They have not been known for their great performance.

    Um... not quite. Ever since the PPC was invented, it's always been faster than whatever Intel-based chip was on the market at the time, assuming identical clock rates. The exception was the 601, because you couldn't get equal clock rates; the minimum speed for the 601 was 60 MHz; 486s never went that fast, and the Pentium didn't come out until a few months after the 601 did. With the old 68K-based Macs, you have something of a point (though again, the first Macs were faster than the first Intel-based machines, though Intel would catch up and then take the lead once the PC market began to explode).

    Their latest product is the iMac.

    Huh? Three years ago, their latest product was the iMac. They've only completely revamped their product line since then. And they've introduced products since the latest iMac revisions too (Revision D).

    Why then go and attempt to build a high-performance machine?

    Why not? Apple's got what it takes to do it. Besides which, people like high performance, especially gamers, and Apple's kind of trying to cater to the gamer market now. Note that I said kind of; Apple seems to have a love/hate relationship with gamers. It's well-known that back in the earliest years, Steve Jobs discouraged games for the Mac, because he didn't want it to be seen as a toy. If you ask me, he's still afraid of that, but he's starting to recognize that games are important for a platform's health. I just wish he'd be a bit more enthusiastic about it; as it is, it's rather clear that Apple talks about gaming only grudgingly. It would still rather not have games on the Mac platform, but it sees them as a "necessary evil."

    And yes, I do think that little hangup of Apple's is completely and totally insane.
  • Processing the burnt, charred remains of obnoxious anonymous cowards and trolls such as yourself. Take your Mac-bashing elsewhere.

    Giving up, indeed. Have you checked Apple's stock price lately?

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • ---
    Their latest product is the iMac.
    ---

    Since when is their latest product the iMac? I think you need to keep up on the press releases.

    ---
    Why then go and attempt to build a high-performance machine?
    ---

    Why not? There are a lot of things posted to Slashdot that don't make for the most sensible possibilities. Remember, there are people that come here to learn about lego guns and spend time cooling their systems in beer for the humor value.

    That said, an easy to use system of this kind might be of interest. Losing a few percentage points in the speed department (which I agree with you, is likely with the current MacOS) could very well be made up for by the ease of administration and setup. That is, if you want to move this kind of technology out of universities and into the Real World.

    ---
    No flamewar, please.
    ---

    None intended.

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • Don't proceed to tell me if Slashdot is my cup of tea. You could multiple my user ID (5387) 12 times and still be under yours (63515), so I think I know a thing or two about the people and content here.

    How many posts from me have you seen to make these all encompassing statements? Are you basing this on my being a Mac user and a couple of posts, or have you been stalking me for the last few months?

    ...

    First, the Mac has become LESS closed in recent years than it has been in the past (minus the lack of cloning, which I don't care for, but understand the business reasons of). They've opened as much of their OS up as they can without putting themselves out of business, current Macs (with the exception of the consumer line) are fairly expandable, and so on. As for Firewire, given that they spent a lot of cash to develop it, what's the problem there? A quarter isn't exactly a lot to ask, is it?

    Second, this is 'News For Nerds, Stuff That Matters' - NOT 'News About Linux, All Other Opinions Worthless'. Are you seriously suggesting that anyone who doesn't toe the Linux-user political line doesn't belong here? That sounds like conformity my friend, the same trait that many Mac and Linux users have traditionally railed against.

    Your posting history shows that you're not an idiot, despite those opinions we don't share and a general disdain for computer aesthetics and usability. However, you really should open your mind up a little bit and reconsider your stereotype of Mac users. Quite often, they don't fit. Even worse, they smack of elitism.

    [Note: This is coming from someone who can be found booting into BeOS, LinuxPPC, and MacOS 9 on any given day - with OSX being added to the list when the time comes. Try expanding your horizons, life is much better that way]

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • [quote]
    You could multiple my user ID (5387) 12 times and still be under yours (63515)
    [/quote]

    ...and yet you could probably spell and multiply better than I. *sigh*

    'multiple' = 'multiply'
    '12' = '11'

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • Makes me wish for cheap PPC boxes. Have any of the cheap IBM reference designs been manufactured yet?
  • Keep in mind that a Cray can push through massive calculations in situations unsuited to parallel processing (e.g. very large linear system solutions), which a Beowulf of any type can't handle.
  • Just out of curiosity, has anyone tried using G3/G4 hardware to build a cluster using Linux as the operating system? (In case you didn't bother to read the article, the Appleseed project uses MacOS 8 with a special control panel and process distribution application.) I'm not saying there is anything wrong with MacOS, I'm just curious how it stacks up against a Macintosh-based Linux distribution for parallel computing.

    On a side note, I found it interesting that this page provided parallel computing APIs for C and Fortran, as well as explaining what can be done with them. Most Beowulf-related pages I have seen in the past don't really go into this, leading some people to the incorrect conclusion that any software can be run on a cluster without modification for instant speed boosts. Sorry to burst your bubble, but getting a whole bunch of old 486s together isn't going to instantly give you stellar SETI@Home or Distributed.net scores... :) Kudos to the Appleseed project!

  • Um... not quite. Ever since the PPC was invented, it's always been faster than whatever Intel-based chip was on the market at the time, assuming identical clockrate. The exception was the 601, because you couldn't get equal clockrates; the minimum speed for the 601 was 60 MHz; 486's never went that fast, and the Pentium didn't come out for a few months after the 601 did.

    Actually, a couple of points here are not quite right (though your figures are more accurate than the previous figures quoted).

    The 486 went up to 100 MHz core clock (3:1 CPU to bus ratio). These were sold as "DX4-100" chips (the "4" is the product of marketing). Actually, IIRC AMD offered a DX4-120 (their best offerings in those days ran on a 40 MHz bus). Time frame almost certainly overlaps heavily with the 601; I would have to do more research to quote date/clock frequency points for either line.

    Performance-wise, I've mainly relied on SPEC benchmarks (www.spec.org). These are pretty much the canonical measures of performance for real CPUs (and desktop CPUs as well, which are asymptotically approaching workstation-class). By insisting on the same tests (compiled with the tester's choice of compiler) and on full disclosure of the test systems used, they are as close to vendor-neutral as we're likely to get.

    x86 and PPC based machines benchmark at roughly the same speed at any given time in SPEC history. Clock frequencies aren't the same, but that's irrelevant - performance is what matters. While PPC was certainly fast, and definitely has a cleaner architecture than x86, it failed to substantially outperform x86 (and conversely, x86 failed to outperform PPC).

    Where things get interesting is the G3 and G4. There has been a suspicious dearth of SPEC information from Apple in recent months/years, and a strong outpouring of questionable benchmarks quoted by their marketing departments (most bizarre was the "1.5 clocks/pixel vs. 200 clocks/pixel" filtering quote, debunked on Slashdot by a few people who provided far faster x86 code). The G3 and G4 are most certainly excellent processors, but Apple has failed to put believable numbers behind them when quoting benchmarks.

    What I'd really like to see is an independent testing of SPEC marks. This would be do-able on any of the *NIX variants currently running on Gx, and would be quite straightforward on MacOS X. The problem with independent benchmarking is that Apple is best qualified to produce a compiler for the G3/G4. If a PPC based *NIX group tried it, their numbers would most likely be lower than optimal because the compiler wouldn't optimize as well as it might be able to. It would still be interesting as a data point, though.

    Before anyone objects that SIMD instructions (like AltiVec and SSE) are difficult to compile to, I'd like to point out that loop unrolling optimizations take you half way there already.

    Summary: In the past, advocates of both architectures have failed to prove that their architecture trounces the other. IMO, current _meaningful_ bickering is hampered by a lack of SPECmarks.
  • Maybe because 350MHz is TWO YEARS OLD TECHNOLOGY in the x86 World????

    Clock speed will often vary quite widely between architectures; this does not directly affect performance (look at Sparc chips, for example; similar SPEC marks to x86 chips at much lower clock rate).

    Performance is based both on clock rate and on how much work is done per clock. This in turn is affected by how pipelining on the chip was set up, and many other things.

    A good reference on the subject is "Computer Architecture: A Quantitative Approach", by Hennessy and Patterson (published by Morgan Kaufmann).
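    To make the clock-versus-work-per-clock point concrete, here's a toy calculation. The clock rates and IPC figures below are hypothetical, picked only for illustration; real chips vary per workload.

```python
# Crude model: throughput ~ clock rate x instructions retired per clock (IPC).
# All numbers here are made up for illustration, not measured from real CPUs.

def mips(clock_mhz, ipc):
    """Rough throughput estimate in millions of instructions per second."""
    return clock_mhz * ipc

chip_a = mips(clock_mhz=350, ipc=2.0)    # lower clock, more work per cycle
chip_b = mips(clock_mhz=500, ipc=1.25)   # higher clock, less work per cycle

print(chip_a)  # 700.0
print(chip_b)  # 625.0 -- the lower-clocked chip wins this (made-up) matchup
```

    Which is exactly why whole-program benchmarks like SPEC, not MHz, are the meaningful comparison.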
  • Current Macs use IDE, SDRAM, PCI, AGP and several dozen other acronyms.

    I've never heard of Macs with AGP ports, and until going back to university I was working for a graphics driver development company. We would have been overjoyed to have AGP Macs to write drivers for.
  • It costs time and effort to learn what you need to know to build a system, to pick out components, and to actually put the thing together. And when you sum it all up, it's not such a small amount of time and effort. So unless you're in the business of PC hardware support and thus have to possess all this knowledge anyways, then you're getting a *lousy* deal.

    It takes me five minutes looking over a parts sheet to decide what I want in an x86 system. I go down to the store, and say "build this for me". I come back a couple of days later, take it home, spend another five minutes attaching cables, and it goes (well, then there's Linux installation, but if I was feeling masochistic I could get Windows pre-loaded).

    I used to build my own machines from parts as a hobby. If it's fun, it isn't "cost". I switched to paying for pre-assembly when it became less fun. Cost increase is minimal.
  • IP may be more flexible but I think AppleTalk/EtherTalk still has an advantage in ease of use. You plug the computers into the LAN and the networking software automagically configures itself. That's nice for small networks.
  • There's no fundamental difference between the two. Both involve breaking a problem down into independent parts, solving those parts, then recombining. The difference is in the speed at which this process must happen. With distributed.net, seti@home, and other "distributed" applications, you can grab your task and sit on it for hours or days at a time before communication with other nodes is necessary. In a parallel setup such as the article mentions, communication has to happen much more often, and it must transmit much greater amounts of data. Take climate modeling, for example. You could perhaps break the planet up into tiny cells, and each processor gets a cell to work on, but they're going to have to talk to each other pretty often to find out what's going on in the other cells. This is also why Crays and the like are used instead of just buying a thousand PCs: the communications infrastructure is far, far more advanced and allows applications where communication needs to be very rapid.
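    A toy sketch of the climate-cell idea (illustrative Python, not the actual code from the article): a 1-D grid is split across "nodes", and every update step each node must first obtain the edge cells of its neighbors, so the network is exercised once per step rather than once per job.

```python
# Toy domain decomposition. Each chunk belongs to one "node"; before a node
# can update its cells it needs one ghost cell from each neighbouring chunk.
# That ghost exchange is the per-step communication a real cluster pays for.

def exchange_ghosts(chunks):
    """Return (left_ghost, right_ghost) for each chunk -- the 'messages'."""
    ghosts = []
    for i, chunk in enumerate(chunks):
        left = chunks[i - 1][-1] if i > 0 else chunk[0]          # edge: reuse own cell
        right = chunks[i + 1][0] if i < len(chunks) - 1 else chunk[-1]
        ghosts.append((left, right))
    return ghosts

def step(chunks):
    """One smoothing pass; each cell becomes the mean of itself and neighbours."""
    ghosts = exchange_ghosts(chunks)          # communication happens EVERY step
    new_chunks = []
    for (left, right), chunk in zip(ghosts, chunks):
        padded = [left] + chunk + [right]
        new_chunks.append([(padded[j] + padded[j + 1] + padded[j + 2]) / 3
                           for j in range(len(chunk))])
    return new_chunks

grid = [[0.0, 0.0], [9.0, 0.0]]               # two "nodes", two cells each
print(step(grid))                             # [[0.0, 3.0], [3.0, 3.0]]
```

    A distributed.net-style job would instead hand each node its chunk once and collect answers hours later; here the exchange_ghosts call sits inside the inner loop.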
  • An UltraSPARC workstation is by definition NOT a PC. PC stands for personal computer, an inexpensive low-power system; any UltraSPARC is automatically neither low-price nor low-power. As for Macs, the entire Power Mac family is basically PC compatible. The main difference between a Compaq PC and a Mac is the difference in the chipset. Current Macs use IDE, SDRAM, PCI, AGP and several dozen other acronyms. It is true that before the Power Mac, Apple used NuBus and such proprietary peripheral connections, but not anymore, and even then its hard drives and CD-ROMs were SCSI. I don't know how much more open Apple could be on its systems. A PCI card will work in both a Mac and a PC if you have the appropriate drivers for the device. It's similar to the complaint many Linux users have about vendors: they only support Windows. Older Power Macs even had an entire PC subsystem in them so you could run Windows from within MacOS, namely the Power Mac 7300/180 PC Compatible, which had a Pentium 166 along with its PPC 604e. It could run PC apps on the native x86 ISA inside MacOS. You can still buy PCI add-in cards with an x86 subsystem for running Virtual PC. Read up before you say Macs are 100% proprietary.
  • I'm so sick of seeing everyone carp about Macs not being upgradable. Ohhh, you built your own PC, you must be a technical genius... oh yeah, you plugged some hardware together and stuck in a boot floppy, big damn whoop. One of the beauties of the entire Power Macintosh line was its PCI bus, rather than the older and less cool NuBus. Besides the PCI bus, which was compliant with all PCI standards, there was the fact that the system-critical stuff resided on a piece of silicon called the logic board. Something companies like Newer and Sonnet have done is make what they call "upgrade cards". In a real simple operation one can take an aging Power Mac and turn it into a pretty fast G3 or G4 system just by replacing the old logic board with a new one. The enhanced systems use the old memory and such, but the speeds are comparable to newer systems. For a total of about $800 you can buy an older 9600 Power Mac on eBay and install a brand new G3/G4 board in it. Most of the other parts are replaceable since they use a SCSI or ATA bus. I can even put USB ports in said 9600 system. I used to think Macs weren't upgradable until I actually looked into it. Hell, I could get said 9600 system to run Windows 98, 2000, or even x86 Linux. This post is a bit off topic, but so are the "I'll never touch a Mac" people. I think the Appleseed project gives a lot of credibility to the claims Apple and Motorola have made about the G4. Next they should make one of these clusters using iBooks; a room full of those buggers would be eerie.
  • Most benchmarks show the fastest G4 being somewhat slower than the fastest PIII - such as these from www.macinfo.de [macinfo.de]
  • by / ( 33804 )
    Anonymous Coward: I wonder what it'd be like to build a beowulf out of these!
    /: It is a frickin' beowulf, you turd!
    Anonymous Coward: Oh yeah, right.

    The build-a-beowulf joke has officially been beaten into the ground for the last time. Anyone who uses it at a future date will be liable for being beaten into the ground. You have been forewarned.
  • take a look at the latest top500 list:

    http://www.top500.org/lists/TOP500List.php3?Y=1999&M=11

    Cray is not dead just yet....

    Comparing a beowulf cluster to a cray is just silly.

    -nacks
  • "What are the clusters user for, BTW?"

    A1: Playing Quicktime movies of an Aibo, really, really, fast.

    A2: Trading Apple stock options.

    hehe, "checked out their stock latey?". Like that has anything to do with anything.
  • Includes the recipe for making your own Apple-flavored Beowulf cluster. "

    Aww man, you just took the fun out of us making Beowulf jokes.</silly>

    On a more serious note, are the comparisons fair? They seem to be using the same MHz values, but how well does a 450 MHz P2 compare with an Apple G4? Most people wouldn't try to compare Intel chips with Alphas, for example. One way to determine this would be to try doing similar but simpler problems on single Intel and single Mac computers, and seeing how those two setups compared.

    Why p2's and not p3's?

    What parallel software are they using on the Intel computers?

    Were they able to determine why the Apple computers run better in parallel than the Intel computers? Was it because the Intel computers ended up saturating the lines between them?

    --

  • I think the idea was to see how easy it would be to set up a cluster with MacOS (MacOS is known for making things pretty simple). I thought they were just using straight IP over Ethernet; does anybody use AppleTalk anymore? That stuff was damn cool for its time, but IP is obviously much more flexible. On another note, I'd like to see them make a LinuxPPC [linuxppc.org] Beowulf out of the same boxes and see how the results compare to doing it under MacOS.
  • Very true. Like I said, Apple is known for making things simple. The trouble is, you can never get at the underlying stuff and poke around. :) I love MacOS because of how easy it is to maintain, and I love Linux cause of how much I can explore and change things.
  • This isn't a Beowulf cluster. If you read the article, they are using their own program to split processing between computers. It sounds to me like the techniques used are not as flexible (nor as scalable) as Beowulf.

    Nonetheless it is a good demonstration of high processing power for low prices on machines other than x86s. (Alphas are too damn expensive.) What's interesting is that they recommend further reading so that one can set up a Beowulf. I wonder how a Beowulf would perform in comparison (using the same number of computers etc.). I'm guessing slower, because the scalability also means a possibly higher protocol overhead.

    --nullity--

    I am nothing.
  • No offense, but...duh.

    I certainly have nothing against Mac hardware. The PPC processor is cooler and more elegant than what AMD is forced to use because of silly x86 compatibility. Yes, it sucked for people when Apple said "we're cutting everyone who ever bought computers from us off", but people got over it, and the PPC is much better for it.

    However, those sorts of machines haven't really been practical for me to use. I'm still in college, and so I don't have lots of money around. Plus, I enjoy building my own systems, which is something I really can't do with PPC. Thanks to the modularity of PCs, I can upgrade my computer one bit at a time, cycling the parts down through various levels (I've got lots of little side projects, plus machines for my parents and brother, and juggling parts is fun :)

    x86 has been the answer for me; with a decent hs/fan, my K6-3 runs plenty fast and cool for all my needs. Would a PPC system be nice? Surely. Do I really want to spend more money on a system that pretty much has to stay in one piece? Not really.

  • I build my own machines, because if anyone else did it for me, then they'd invariably do something wrong :) So, mine are probably a bit more expensive than your super-cheap PCs.

    However, I'm certain that you can get more computing power from $800 worth of PC parts than an $800 iMac will provide. Also, I discredit the iMac, because it retains the thing I used to hate most about Mac hardware (the G4s are much better about this): non-modularity. As I said, I don't "buy computers". I buy a new video card if I want one, then shuffle my old one over here, the one it replaced over there, etc. My modem and floppy drive have lasted me through 4 generations of CPU and RAM upgrades, because they are still perfectly good. My sound card has lasted 3 generations, etc. The video card is less than 6 months old, and the CPU even newer. You still really don't have that flexibility with Mac hardware, and I would miss that too much.

    Also, you pretty much buy Macs from Apple. Apple decides that you can't get floppy drives any more, and that you must get DVD-ROM or RAM (no option of CD-ROM), and that your smallest hard disk option is 10GB. What if I want to buy a bunch of cluster nodes? I DON'T want to spend money on:

    1. Zip drives
    2. More than 1-2 GB hard disk (fs will likely be distributed)
    3. CD-ROM/DVD/whatever drives (I would do one unit, then make copies of the hard disk)
    4. Big pile of keyboards and mice
    5. Fancy video cards which will be displaying text for setup, then NOTHING unless they need maintenance.

    With PCs, I can save that money by not buying those parts (buying super-cheap video cards in the case of #5), and put that money into faster CPUs, more RAM, etc. Or maybe I want to use that money to go see Depeche Mode!! :) Whatever.

    There are projects starting with non-Apple PPC stuff, and I'm paying close attention to them; PPC chips are cool and efficient, and I'd love to have them in something I consider useful.

  • The real important question is what would be the best flavor for a cluster? I dunno if grapes and limes go together all that well.

    Actually, this is great to see. Too bad for Cray, though; they used to kick some serious ass in this sort of head-to-head processing prowess.

  • Yeah, it's not the same as it used to be. Cray's HQ was just down the road from my house in Colorado Springs and they still seemed to be making a profit. Then when Mr. Cray passed away the company never quite recovered.
  • Prophet Systems builds [eternalcomputing.com] PPC machines and components based on IBM's reference designs.
  • Grape of course - they occur naturally in bunches.
  • by Chris Frost ( 159 ) <chris@frostnet.net> on Friday February 04, 2000 @04:53PM (#1304373) Homepage
    Bad pun there, but couldn't resist.

    The crays they compare to are pretty old beasts, and they only tested with a few processors (Cray's SV1 for example can take advantage of over 1200 cpus!).

    Drop by http://www.sgi.com/sv1/tech_info.html
    (or http://www.cray.com/) to see info on the SV1 if you're interested.

    Now, don't get me wrong; this is a very nice cluster, but they seem to unfairly compare it to a Cray (the T3E-900 is not even a recent machine!). I'm sure someone else will explain where computers such as Crays and SGIs come into real use (high-throughput work), but for distributed systems requiring less than gigantic amounts of communication bandwidth, Beowulfs do handle many kinds of tasks very well (and cheaply!).

    Just didn't want everyone to think a 16-node G4/G3 cluster was faster than a Cray (actually, the SV1 can use CPUs /each/ capable of 4.88 GFLOPS).
  • by Darchmare ( 5387 ) on Friday February 04, 2000 @04:08PM (#1304374)
    There's a somewhat humorous portion to the instructions, although I'm not sure it was intended. Check them out:

    http://exodus.physics.ucla.edu/appleseed/appleseedrecipe.html [ucla.edu]

    Setting this up is as easy as 1, 2, 3 apparently (despite, well, paying for everything). After a 3 step process, they put a little note at the bottom:

    "Note: To build a Beowulf, a Linux-based cluster, we think the following 230-page book is an excellent introduction: T. L. Sterling, J. Salmon, D. J. Becker, and D. F. Savarese, How to Build a Beowulf, [MIT Press, Cambridge, MA, USA, 1999]."

    A 230 page introduction? :>

    - Jeff A. Campbell
    - VelociNews (http://www.velocinews.com [velocinews.com])
  • A beowulf cluster can be assembled with *multiple* network cards to decrease the network distance between each processor. Basically instead of the machines sharing a single network, there are several separate networks to split the traffic. The reason for this is that as traffic on Ethernet rises, it reaches a point where it hits a wall and throughput can really decline fast.

    Appleseed is set up using the internal ethernet card (though I would guess you could use a different interface like a fiber optic connection) connected in the usual fashion to a regular switch. The article didn't mention any option to install more network cards and use those.

    Now, for most things a shared 100M network will be sufficient. Depending on your applications, I would guess that a Beowulf would be more configurable. If I were to make a 1024-node cluster, it would be a Beowulf with the nodes arranged into a hypercube. Putting 1024 Macs onto a single shared network might cause performance problems depending on what you're doing. Usually programs that don't require a lot of communication between nodes run best on Beowulf-type clusters, so the problem of having only one network card in a machine might be no big deal after all.
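    For the curious, the hypercube idea is easy to sketch (an assumed topology for illustration, not something described in the article): number the nodes in binary and wire together any two whose addresses differ in exactly one bit.

```python
# In a d-dimensional hypercube each node has d links, and any two of the
# 2**d nodes are at most d hops apart -- a big win over one shared Ethernet.

def neighbours(node, dim):
    """Nodes directly wired to `node` in a dim-dimensional hypercube."""
    return [node ^ (1 << k) for k in range(dim)]

def hops(a, b):
    """Minimum hops between two nodes = number of differing address bits."""
    return bin(a ^ b).count("1")

dim = 10                        # 2**10 = 1024 nodes
print(len(neighbours(0, dim)))  # 10 links per node
print(hops(0, 1023))            # 10 -- worst case across all 1024 nodes
```

    So a 1024-node hypercube keeps every message within 10 hops, instead of every node contending for the same wire.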

  • by Guppy ( 12314 ) on Friday February 04, 2000 @05:08PM (#1304376)
    What I'd like to see is a Beowulf cluster of iMacs -- one of each color, arranged in a little circle. It may not be as fast as a Cray, but now you've got the world's cutest Beowulf cluster!
  • by Kaufmann ( 16976 ) <rnedal@NOSpAM.olimpo.com.br> on Friday February 04, 2000 @04:49PM (#1304377) Homepage
    Okay. As cool as this whole shebang sounds (and it does sound pretty damn cool), aren't we usually the ones who start yelling "benchmarks are meaningless" whenever the guys in the Microsoft trenches pull another one from their files? I say, stick with that position. Better a false negative than a false positive. So I'm sorry, but I don't care about these benchmarks any more than I care about any of the Mindcraft series, and that's that.

    (Not that I wouldn't like a nice cluster of Macs, mind you. Ummm. Tasty.)
  • by / ( 33804 ) on Friday February 04, 2000 @04:18PM (#1304378)
    Note: To build a Beowulf, a Linux-based cluster, we think...

    The funny part is that the slashdot story-posting perl scripts didn't post this story twice for mentioning both linux and beowulfs.

    C'mon, CmdrTaco! Release the source to the story-posting perl scripts, already! ;)

  • by Sharkey [BAMF] ( 139571 ) on Friday February 04, 2000 @04:07PM (#1304379) Homepage
    Guess Crays aren't quite what they used to be. Maybe they should make them in grape colors :)
    Sharkey
    http://www.badassmofo.com [badassmofo.com]
  • by dmelomed ( 148666 ) on Friday February 04, 2000 @06:51PM (#1304380)
    I actually had the honor of working with one of these clusters at a famous university Plasma Physics Lab. Several points here. Do not forget that the benchmarks advertised are for UCLA's particular gyrokinetic code, done in F77. The gyrokinetic code usually doesn't require a lot of communication anyway, so for small clusters slow networking is not much of a problem. That is why Beowulf clusters are so suitable for problems where you don't have much inter-node traffic. Crays use very high-bandwidth interconnects, which are expensive and not needed for particle code like this.

    The difference is in code implementation and the CPU. Also, Crays use the Alpha as their processor, and Alphas are very good at FP-intensive code, but they need a lot of code tweaking to squeeze out all of the performance an Alpha can give. Once the code runs great on an Alpha, put it on a different CPU and you have it crawling (and vice versa). The lab that I worked for had Fortran 90 gyrokinetic code which was basically accomplishing the same thing as UCLA's, BUT it ran 3 times faster on a 400 MHz Alpha than on a 350 MHz G3 using AppleSeed. Network-wise, scaling it on G3s was not a big problem (small cluster, not much IP traffic), though I should note MacOS would be completely unresponsive during the benchmarks: while the code was running, all of the CPU was devoted to it, and MacOS doesn't do preemptive multitasking. Surprisingly, UCLA's code runs very nicely on a G3, just as fast as or better than it does on an Alpha; that's why Macs are so suitable for them (besides the plug-n-play Beowulf factor). So I think comparing their results to an archaic Cray is a nice way of attracting attention, but when it comes to details it's just another Beowulf. Put a different code that does well on a Cray on it, and it's 2/3s slower per processor. Though I must say that the availability of this kind of software is great, since research facilities have tons of Macs and CPU cycles.

    This software also eliminates the need for *nix sysadmining, but it is costly. Fortran compilers are expensive, and so are Macs. If I were building a cluster and didn't have Macs on hand, I would use cheap PC labor and Linux (though I would still have to pay around a grand per license for an F90 compiler :(). What *nix offers is compatibility, flexibility, and preemptive multitasking, and it allows you to run several parallel jobs at a time. Traditionally Beowulf software was written for *nix, and so many MPI implementations and other essential software (like FFTW and other math/scientific libraries) are available primarily for *nix, and would have to be ported to MacOS or any other OS.
  • by Richard Mills ( 17522 ) on Friday February 04, 2000 @06:20PM (#1304381)
    Don't believe all this hype and go sell your Crays just yet. What many people fail to realize is that the total number of achievable MFLOPS of all the nodes in a parallel machine IS NOT a very meaningful measure of how powerful or useful the machine is. This ignores the nature of the interconnect between the processors, memory, etc., which is *extremely* important in most parallel computations, and is what makes supercomputers so damned expensive. This stuff is not Ethernet! For many types of parallel applications, Ethernet becomes such a bottleneck that no advantages can be realized from parallelizing an application.

    The generation of fractal clusters is a classic example of what are known as "embarrassingly parallel" problems in parallel computing circles. As you iterate points in the set, their evolution is independent, so a minimum of message passing is required. (In computer science-ese, "the computational graph is disconnected".) With even the crummiest of interconnects, you can get good results out of parallelizing these fractal cluster generators, because the only thing that will really make a difference is the total number of FLOPS achievable by each of the nodes. Fractal set generation is just not a very meaningful benchmark.
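    A point-wise fractal iteration makes the "disconnected graph" concrete. The sketch below is a generic Mandelbrot-style escape-time loop (illustrative only, not UCLA's actual code): each point's result depends on nothing but that point, so nodes can churn through their share with zero mid-run messages.

```python
# Each point is a pure function of its own coordinate c; there is no shared
# state, which is exactly what makes the problem embarrassingly parallel.

def escape_count(c, max_iter=50):
    """Mandelbrot-style iteration count for a single point."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

points = [0j, 1 + 1j, -1 + 0j]
# A serial map works; swapping in multiprocessing.Pool().map, or handing one
# chunk of points to each cluster node, changes nothing about the result.
print([escape_count(c) for c in points])  # [50, 1, 50]
```

    Contrast this with the finite-element case below, where every update needs its neighbors' values and the interconnect dominates.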

    But consider, say, a finite-element model where every point in your grid is affected by its neighbors. Then you need to do lots of message passing, and the nature of the interconnect becomes orders of magnitude more important. In this case, I guarantee you that a commercial supercomputer is going to beat the pants off of any cluster machine. This is not to say that cluster machines aren't useful, but a real "supercomputer" still has its place.
  • by levl289 ( 72277 ) on Friday February 04, 2000 @05:52PM (#1304382) Homepage
    I attended UCLA as a physics major while this project was still under way, and took a more basic introductory course on computer modelling of plasma systems. The professor doing a lot of the work in this field is John Dawson [ucla.edu]. Along with him, and IIRC more in charge of the computer systems, is Victor Decyk.
    Decyk taught half of the class, although he was technically a TA. He explained the progression away from high-$$ "super computers", such as Crays, and the usefulness of clusters.

    I also had the honor of working at JPL, where Decyk was a part-time scientist in the computing/analysis department for the Experimental Measurement Devices group.
    If you look up something like "computer plasma modelling" on the 'net, you'll very likely find papers by these two...very interesting high-powered stuff - the mind boggles at just how much the computer is crunching when you realize that a large number of the plasma particles are interrelated spatially.
