Cringely Wants A Supercomputer in Every Garage
Nate LaCourse writes: "Real good one from Cringely this month. It's on building his own supercomputer, but with some twists." You'll probably also want to check out the KLAT2 homepage to learn more about their Flat Neighborhood Network. And since KLAT2 has been around for nearly a year (check out the poster on this page!), perhaps a 3rd generation is in the works?
Am I the only one? (Score:2, Insightful)
Re:Am I the only one? (Score:1)
WELLL.... (Score:3, Funny)
Well, not to be one of those stick-in-the-mud 'Read the %($#ING article' type people, but KLAT2 is a reference to The Day The Earth Stood Still. Had you looked at the articles in question (particularly the KLAT2 page), you would have discovered that they were indeed intending the reference. Heck, go check it out - the poster they made up for it is worth the look! :-)
Re:Am I the only one? (Score:1)
Our poster is based on one of the posters for the classic 1951 science fiction movie The Day The Earth Stood Still. Yes, KLAT2 is an obscure reference to Klaatu, the fellow from outer space who came, with the robot Gort, to explain to all the people of the earth that if humans cannot work together in peace, the earth will be destroyed for the good of all planets. Of course, in the movie, Gort didn't have an AMD Athlon logo on his chest and Tux, the Linux penguin, wasn't the actor inside the suit... it's a very good movie anyway.
:)
Re:Am I the only one? (Score:2)
Re:Am I the only one? (Score:3, Informative)
I think it translates to "Klaatu says not", but I'm basing that off of a half-assed knowledge of German ("nikto" -> "nicht"), the similarity between "barata" and "berate", and context. Maybe there's some kinda Latin thing in there somewhere, idunno. It's the instruction Klaatu tells Love Interest to give to Gort the robot, so he won't destroy the entire planet.
And since someone asked, yeah, that's the same line whatsisface has to use in "Army of Darkness", and it was the only part of that lame movie I can remember laughing at.
Now, the question is: Will I get modded down because this is actually offtopic as hell, or because I insulted "Army of Darkness"? %-)
Re:Am I the only one? (Score:2)
It's been years since I saw The Day the Earth Stood Still, but if memory serves, this line meant "Klaatu commands obedience."
yeah, that's the same line whatsisface has to use in "Army of Darkness", and it was the only part of that lame movie I can remember laughing at.
Oh man, either you saw a different movie than I did or else your sense of humor is way different from mine. Oh well, to each his own, I guess.
"Klaatu barada... necktie!"
steveha
Great (Score:3, Funny)
genetic algorithms (Score:1)
It is really cool considering that $6,000 is now enough to take on massive projects. How many months would this machine take to render Final Fantasy: The Spirits Within?
Re:genetic algorithms (Score:1)
Re:genetic algorithms (Score:5, Funny)
And it shall be called. . . Earth!
Re:genetic algorithms (Score:1)
A damm good project (Score:1)
Am i the onlyone who see's the posibilites of this (Score:1)
Re:Am i the onlyone who see's the posibilites of t (Score:1)
I realize you were joking, but seriously, do any current games use SMP to their advantage? What about running games on a MOSIX cluster? Has anyone tried it?
Re:Am i the onlyone who see's the posibilites of t (Score:1)
Other Q3A-based games may also use this (RtCW does, but in the demo at least it's _very_ crashy).
SMP & gaming. (Score:1)
Can you even tell the difference in gameplay between, say, a 900MHz and a 1.4GHz?
Mostly, it seems to be the video card that makes the difference, and the RAM, not the processor.
Finally... (Score:2, Funny)
Imagine... (Score:5, Funny)
Re:Imagine... (Score:5, Funny)
Re:Imagine... (Score:2)
Now is the time for all good men.... (Score:5, Interesting)
Secondly, UWB seems to be the holy grail of wireless networking, yes. However, is this something that the agencies of the world are going to let out of the bag as easily as he says? I can think of the CIA and the NSA having a few choice words about such "undetectable signals" being used by commonfolk after September 11th...
Just my two cents
Re:Now is the time for all good men.... (Score:1)
We won't know until Mr Jobs wants us to know.
Re:Now is the time for all good men.... (Score:4, Interesting)
Okay, we need to burn some spare cycles. Lots of them, in fact. I have some ideas. There may even be a couple in here that can be taken semi-seriously.
* SETI@H. . . Why are you looking at me like that? Admittedly, it's cliched, and I'm the impatient type who figured I'd find my first LGM within a week. Or by the end of the year at the very latest. But I still think that it's a pretty cool thing to be doing. Or load up one of the alternatives like Folding@Home.
* Find a buddy with a similar supercomputer, and have them play chess. Or tic-tac-toe billions of times every second (sorry, War Games flashback).
* There are lots of mathematical problems out there just begging to have a few supercomputers thrown at them. I'm not aware of what they are, so consult your local Mathematics department and offer your services.
* If you're not interested in doing video compression or complex scene rendering, you might be able to find someone who is. Some indie film maker who wants to play with the big kids is going to become your new best friend. Be sure to ask for a walk-on.
* Some sort of AI project could be interesting, providing you have some specialized training. Or you just give someone at MIT telnet access.
Re:Now is the time for all good men.... (Score:2, Interesting)
That'd be much preferable to running some particular piece of code for a week or whatever on a workstation that some bunch of first-year undergrads are using night and day. (All for one result - then realising you'd made a mistake in said code.)
It would speed up research in so many diverse fields.
I've got one ..... (Score:3, Interesting)
Re:Now is the time for all good men.... (Score:2, Interesting)
On the contrary, UWB cannot be used for long-range communication, so it's not going to replace your cell phone anytime soon. However, it's probably the best thing we've got for screening people at airports. This technology can literally see through walls, and it can do it without hurting anyone.
Stephan
Re:Now is the time for all good men.... (Score:2, Interesting)
When I was a kid I played with particle systems. I'd set up a cloud of particles with mass and/or electrical charge and see how their simple interactions created large-scale behaviour. It was a simple system that didn't scale well (I made some attempt to break the space up into cubes and treat the contents of far away cubes as one particle, but it wasn't a seamless transition). Even with the limited number of particles I could play with (a few hundred), I still saw a lot of interesting things happen, like the material breaking up into 2 or 3 separate clusters. If I had oodles of CPUs, I'd enjoy figuring out good ways to split the load between them.
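For the curious, here's roughly what that toy looked like, redone as a rough Python sketch - plain direct-summation gravity with made-up constants, and none of the cube/far-field lumping (that's the part that's actually fun to parallelize):

# toy_particles.py - crude direct-summation particle toy (assumptions:
# plain Newtonian gravity only, arbitrary constants, no spatial cubes).
import math, random

N, G, DT, SOFT = 200, 1.0, 0.01, 0.05        # particles, constants, softening
pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
vel = [[0.0, 0.0, 0.0] for _ in range(N)]
mass = [1.0] * N

def step():
    acc = [[0.0, 0.0, 0.0] for _ in range(N)]
    for i in range(N):                        # O(N^2) pairwise forces
        for j in range(i + 1, N):
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(x * x for x in d) + SOFT * SOFT
            f = G / (r2 * math.sqrt(r2))      # softened 1/r^3 factor
            for k in range(3):
                acc[i][k] += f * mass[j] * d[k]
                acc[j][k] -= f * mass[i] * d[k]
    for i in range(N):                        # crude Euler-style update
        for k in range(3):
            vel[i][k] += acc[i][k] * DT
            pos[i][k] += vel[i][k] * DT

for t in range(100):
    step()

The obvious cluster version hands each process a block of the i values; the interesting part is exactly what I described above - deciding when a whole far-away cube can be treated as a single lump.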
In today's society (those societies whose members waste time on Slashdot, anyway), life isn't just about making a living. So in essence, these machines can be used for having fun, which is a good enough reason to make them.
P.S. It's no reason to build a cluster, but if SETI@home doesn't turn you on, perhaps Folding@home [stanford.edu] will.
Re:Now is the time for all good men.... (Score:1)
Are any AVERAGE geeks out there going to be able to make use of this processing power 0r is it ju5t f0r th3 1337 br4gg1n6 |216h+5?
IIRC, Babylon 5 was produced on Amigas; however, I don't know what software package they used.
Re:Now is the time for all good men.... (Score:1)
I don't know what software package they used.
IIRC, it was Video Toaster. I seem to remember seeing that in the B5 credits one of the ~5 times I watched an episode.
Re:Now is the time for all good men.... (Score:2)
They started with Amigas but eventually moved on [cybersite.com.au].
Re:Now is the time for all good men.... (Score:2, Insightful)
Ever wait all day to compile test versions of large software packages? No longer. Ever wish something would go just a little faster? No longer. Ever felt like encrypting all of Usenet history in order to do frequency analysis on the output? You might just finish your tests in this lifetime... the list goes on and on.
The main benefit that I see in this sort of cheap-components, high-parallelism approach is that a failure in one unit is not fatal to the whole. But that's where I'm a little wary of the whole rigamarole of having to painstakingly compute the best way to connect all these redundant Ethernet connections. That doesn't sound very fault-tolerant to me. But then maybe it is; it's just that when a fault appears, it slows down the system because it throws off the calculated topology.
HP Did This Too (Score:5, Interesting)
Re:HP Did This Too (Score:1)
$210,000 is not slightly more expensive. KLAT2 costs $41,000 and Cringely used only $6,000. AND it only ranks 385th while KLAT2 ranks 200th.
Sure (Score:2, Offtopic)
Now it is used to house a $41,000 supercomputer! Ph43r m3!!
but then, I wouldn't allow anyone to drive a car into my garage (WATCH THOSE NETWORK CABLES ON THE GROUND!), so should I build another garage for my real cars?....
Re:Sure (Score:1)
easy cowboy (Score:3, Funny)
Those are some interesting ideas.
Now how about organizing them before publishing them? Call me pre-postmodern (and I'm still in my twenties), but I tend to learn more from a coherently-organized message than from a random jumble of statistics and facts. Cringely jumps from a detailed description of the KLAT2 and its innovative networking technology to a brief description of UWB. And then it's over.
Maybe I'm missing something.
Re:easy cowboy (Score:1)
Re:easy cowboy (Score:2, Informative)
If you wanna check out a sizable collection of PDFs on the subject of Ultra Wide Band, uwb.org has some links here [aetherwire.com].
Re:easy cowboy (Score:2)
Yes, he is actually going to try to build this thing, and he is going to document and post his progress as well as every single technical snag and kludgey solution.
I can hardly wait!
Now what... (Score:1, Redundant)
Running Super-SETI at home, claiming to be the greatest contributor when they really find ET?
Running Super-Quake with all the transparent cheat codes on, without the slightest jitter?
Rendering the MSN front page in less than a second, with Mozilla?
Any better idea?
Re:Now what... (Score:2)
So now that we have a cheap supercomputer, all we need is cheap software.
Re:Now what... (Score:1)
Just do the following:
1) start the program going
2) run Terminal
3) type top and hit return
4) look for the PID number for your process (this is the pidnumber in step 6)
5) hit Control-C to quit top
6) type sudo renice -16 pidnumber
7) enter the administrator password
8) watch the time needed drop
Re:Now what... (Score:1)
sudo renice -16 `ps ax | grep <insert app name here> | grep -v grep | awk '{ print $1 }'`
if you have a few apps, stick it in a loop like:
for i in `ps ax | grep <app name> | grep -v grep | awk '{ print $1 }'`; do sudo renice -16 $i; done
(change 'renice -16' to 'kill', and you have my favorite alias)
Re:Now what... (Score:2)
This movie project was my first experience with OSX, and the first real time I've spent with a Mac since I gave away my 7100 a few years ago... With so much control over the OS and a system that didn't crash on me once I think I'll be spending more time with it.
Re:Now what... (Score:2)
It was idle and had been running OS 9. I swiped it and installed 10 on it because I needed to produce a video in short order and didn't want to fuck around with installing a FireWire card and Adobe Premiere in my NT workstation.
Really iMovie seems to be a cool program. It was no problem to import the video, cut it up, resequence it, add transitions, sound, etc. The only problem I had was that the output for full-screen high quality was 4+ GB, and compressing it so that it would look good took me several tries of multiple-hour conversions.
Not Just A Supercomputer; Create A Super AI Mind (Score:2, Insightful)
What good is a supercomputer in your garage if you do not use it to maximize garage-holder value? If you provide supercomputer habitat for the progeny and supercomputer embodiment of the JavaScript AI Mind, [sourceforge.net] which has also been coded in Forth as Mind.Forth Robot AI, [scn.org] then your home-sweet-home garage will be a major waystation on the road to the Technological Singularity. [caltech.edu]
Just as the Schrödinger equation for atomic bombs and such was developed seventy-five years ago, when Erwin Schrödinger spent his 1926 Christmas vacation holed up in the Swiss Alps working out a few mathematical formulas that shook the world, nowadays over the 2001 Yuletide there have been the first stirrings of True AI in the JavaScript AI Mind, [sourceforge.net] which any garage tinkerer may adapt for either 'pert near all-powerful supercomputer AI or a killer app if not a killer robot. [mit.edu]
Following in the footsteps of the giants who created Visual Basic Mind.VB [virtualentity.com] and Java-based Mind.JAVA, [angelfire.com] be the first on your block to create the supercomputer-based Garage-Mind.
What is 10base-100? (Score:1)
Here's the quote: "And fast Ethernet (10base-100) costs about three percent of gigabit Ethernet on a per-card basis, so using four cards per PC still saves 88 percent."
Google results for 10base-100: about 81.
Google results for 100base-T: about 64,800.
Re:What is 10base-100? (Score:3, Informative)
Btw, a little nitpick: the TX suffix doesn't mean all 4 pairs. 10base-T and 100base-TX each use 2 pairs; it's 100base-T4 (and gigabit 1000base-T) that use all 4 pairs/8 conductors, and there's no such thing as 10base-TX.
Re:What is 10base-100? (Score:2)
Re:What is 10base-100? (Score:2)
case screws (Score:1)
You ediot! (Score:2, Troll)
Sheesh...!
Re:You ediot! (Score:1)
Closing statement. (Score:1)
I wonder if he's referring to Stahn Mooney's wife from Rudy Rucker's *ware novels...
/. needs a Cringely icon, any suggestions? (Score:4, Funny)
Maybe they could set things up so that ALL his articles hit the main page as soon as he posts them.
If this were the case he could put a "discuss this article" link on his page and simply link to /.
Old news (Score:4, Offtopic)
Still, though, after having to wade through Cringely's painful lack of comprehension of basic technical knowledge, reading the ArsTechnica piece again was quite refreshing.
Re:Old news (Score:2)
Re:Old news (Score:2)
Look on the bright side: at some point in the future when your relatives bother you for help with computer problems, the problems might actually be interesting. Instead of wondering why Windows has eaten Uncle Bob's resume, they'll wonder why there's an anomalous 6ms latency on node 4 and want you to help them figure out whether the problem is related to cable shielding degradation or whether there's a subtle error in the routing algorithm...
The Ultra Wide Band Working Group (UWBWG) (Score:2, Informative)
the ignorant are easily amused (Score:5, Insightful)
Re:the ignorant are easily amused (Score:4, Insightful)
As for the algorithm everyone is talking about: there are some versions which can return a pattern in a second or two on a slow Celeron. Then there are some versions which are optimized for certain datasets and take time to run. But generally, you don't need a supercomputer to design an FNN, even with 64+ nodes.
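Just to make it concrete, here's a toy sketch (Python, made-up node/switch counts, nothing like the real Aggregate.org tool) of the kind of genetic search involved: assign each NIC of each node to a switch so that every pair of nodes shares at least one switch, without putting more nodes on a switch than it has ports.

# fnn_ga.py - toy genetic search for a Flat Neighborhood Network wiring.
# The parameters below are assumptions for illustration, not KLAT2's config.
import random
from itertools import combinations

NODES, NICS, SWITCHES, PORTS = 16, 4, 12, 8
POP, GENS, MUT = 60, 400, 0.05
ALL_PAIRS = NODES * (NODES - 1) // 2

def random_genome():
    return [[random.randrange(SWITCHES) for _ in range(NICS)] for _ in range(NODES)]

def fitness(g):
    # reward node pairs that share a switch, penalize overloaded switches
    members = [set() for _ in range(SWITCHES)]
    for node, nics in enumerate(g):
        for s in nics:
            members[s].add(node)
    covered = sum(1 for a, b in combinations(range(NODES), 2)
                  if any(a in m and b in m for m in members))
    overflow = sum(max(0, len(m) - PORTS) for m in members)
    return covered - 10 * overflow

def mutate(g):
    return [[random.randrange(SWITCHES) if random.random() < MUT else s
             for s in nics] for nics in g]

def crossover(a, b):
    cut = random.randrange(NODES)
    return a[:cut] + b[cut:]

pop = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == ALL_PAIRS:
        break                                 # every pair shares a switch
    elite = pop[:POP // 4]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "out of", ALL_PAIRS)

A search like this over a few dozen nodes is cheap; the expensive versions are the ones that also optimize for specific traffic patterns, which is what I meant by "certain datasets" above.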
Re:the ignorant are easily amused (Score:4, Insightful)
Anyone can build a machine with really high processing performance. Just buy a few thousand Xboxes and plug them into the same Ethernet cable. The real issue is how much communications bandwidth you have between the CPUs. Some problems require almost none - the 'trivial parallelism' problems like DEScrack and the Mandelbrot set. In the 1980s we had a machine with 1000 20MHz processors that could bang out Mandelbrot sets like anything (using the goofy algorithm, not the modern optimizations). But it wasn't much use for anything else.
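By 'trivial parallelism' I mean things like this - each Mandelbrot row is computed completely independently, so the workers never exchange a byte (rough Python sketch, toy resolution):

# mandel_rows.py - embarrassingly parallel Mandelbrot: rows are independent,
# so there is zero worker-to-worker communication.
from multiprocessing import Pool

W, H, MAXIT = 400, 300, 256

def row(y):
    ci = -1.2 + 2.4 * y / H
    out = []
    for x in range(W):
        cr = -2.0 + 3.0 * x / W
        zr = zi = 0.0
        it = 0
        while zr * zr + zi * zi < 4.0 and it < MAXIT:
            zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
            it += 1
        out.append(it)
    return out

if __name__ == "__main__":
    with Pool() as p:
        image = p.map(row, range(H))     # scatter rows, gather results
    print(len(image), "rows computed")

Something like a physics code that needs neighbor exchanges every timestep is a completely different animal, which is the point.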
The problem with competitions for supercomputers is that they rarely measure the communication bandwidth, because (a) it's hard to do and (b) the effect on performance is highly algorithm-dependent.
As for KLAT2's ingenious topology, I once did some research in the area myself when it was the fashion. I tried using minimum-diameter graphs, which should in theory have been better than a plain torus. However, as with Bill Dally at Caltech, I concluded that the additional cost of an exotic topology (more than double the price) was not really justified by the performance advantage (about 10-30% on a good day).
Certainly the many companies that set up to build transputer-based processing clusters with high-performance switches inside did not seem to go anywhere much.
Using a high-performance router at the core of a processing cluster might be interesting. They are pretty cheap these days and getting cheaper.
Supercomputing? Why bother. (Score:5, Insightful)
Speaking as someone who, yes, has actually worked with the big iron...
Why bother? Remember, Moore's Law is still in effect. Recently, we've hit the point in the curve where supercomputers are no longer needed, nor cost-effective. That is, the time it takes for the industry to deliver a far superior product is now shorter than the average lifespan of your typical supercomputer.
We're living in an age where a single graphing calculator you can buy at Walgreens has more horsepower under the hood than what got us to the moon 30 years ago. Your $2700 PC will be worth $150 within 3 years.
Having a supercomputer in every garage makes about as much sense as taking a rocket fuel-powered dragster to the supermarket for a gallon of milk.
Cheers,
Re:Supercomputing? Why bother. (Score:1)
Sorry to be a pisser, but wouldn't that analogy fall flat, considering that advances in land speed and computation speed are very different things? The question I'm wondering about is this: "What the fuck are we supposed to do with all this processing power?" In other words, what is the killer app that would use this? I can only come up with artificial intelligence used for slave-like tasks around the home, plus the latest generic whiz-bang entertainment. Does anybody else know of anything?
Re:Supercomputing? Why bother. (Score:3, Interesting)
Plenty of reasons (Score:2)
But isn't the point of these kinds of projects to derive more computing power in a generic form, something useful in many situations?
Sure, my Athlon isn't too slow at the piddly little hobbyist 3D rendering stuff I play with, but what if I suddenly get grandiose dreams of 3D worlds? Wouldn't it be nice if I could divert the down payment for a house and move myself a year or two farther along Moore's timeline?
I can think of some small-business applications where quick video compression would be nice, especially if the hardware and software were all generic enough to buy off the shelf without a serious outlay of cash. Granted, there are very nice and very fast hardware codecs, but then what if that same small business wanted to render some 3D along with that video stream? Or what if I'm working for them and get permission to render my VR opus overnight?
What about applications that could be enabled by cheap and standardized GFLOPs? If you can't think of any you're not thinking hard enough.
Two Reasons (Score:3, Interesting)
Two: Ever seen the stuff they run on supercomputers today? Simulating a supernova for 1 nanosecond can take a month of CPU time on some of the world's fastest supercomputers. Oh, it's still very necessary. If the past is any indication of the future, we will always need blazing-fast machines to push the limits in the scientific world.
I assume you mean big iron as in mainframe, which is NOT a supercomputer by any means. Mainframes do the work that runs this world; supercomputers help us discover what we'll do in tomorrow's world. They are very different worlds.
Re:Supercomputing? Why bother. (Score:4, Interesting)
Re:Supercomputing? Why bother. (Score:3, Insightful)
What's Moore's Law got to do with this? This is more the area of Murphy's Law, I think. As for why bother, heck, I don't know: because I can. When I had a 286 PC, it did everything I wanted it to do at the time, why did I need a 386? My 386 was dandy, what was the benefit of having a 486? My trusty 486 was quite fast at the time: was the premium price of a Pentium worth it?
Stuff happened! People thought up new applications for newer and faster machines, and then we couldn't do without them. Remember when your average machine could push out 5 frames per second of 160x120 video, tops? I remember when encrypting a 26k text file took almost a minute, each. Back in the day I didn't think I'd be watching DVD videos on my desktop or laptop PC: who'd want to, that's what TVs were for!
Years and years ago I had a program that simulated stellar interaction in small globular clusters. A few hundred stars pushed an 8086 as far as it would go, and it was still an overnight crunch to simulate much interaction. I kinda gave up on it after a while: other interests, etc. I think about it occasionally, wondering when that sort of stuff will get commoditized to the point where I can take a look at it again without having to pull away from current projects for six months. Not quite there yet, I think, but gettin' close, gettin' mighty close... :)
Re:Supercomputing? Why bother. (Score:5, Interesting)
The machine I worked on in the early 90s is still in the top 100 of the supercomputer charts (or would be if the compilers knew about it).
While a desktop Cray-1 can now be had at commodity prices, that machine is now two decades old. The obsolescence rate is nowhere near as giddy as some would claim.
The really big iron tends to have a lifespan of about five years and is typically retired because the power consumption and maintenance costs favor a move to newer hardware. True supercomputers rarely fall victim to Moore's Law. Even the KLAT2 machine discussed here only barely qualifies as a supercomputer; 64 processors is at the low end of the scale. People have Web servers with that number of CPUs. True big iron starts with a few hundred processors and goes up to the tens of thousands.
If by working on the big iron you merely mean you used to use IBM 3090 class machines, then the joke is on you, those machines were often obsolete before they were manufactured. When I worked at one lab I had a desktop machine (first production run Alpha) that was considerably more powerful than the CPUs of the just-installed campus mainframe.
Fact is that many of the people buying 'big iron' in the 1980s and 1990s were incompetent. They bought machines that ran the O/S they knew, which often meant they bought obsolete IBM mainframes for applications where a network of IBM PCs would have served far better. I spent quite a bit of time in institutions where wresting control of the computing budget from an incompetent IT dept was a major issue. In fact, the World Wide Web began at CERN in part as a result of such a struggle. Tim, bless him, wanted the physicists to switch from the IBM mainframe CERN VM to NeXTStep machines. One of the schemes that the CERN CN division had cooked up to force people to use the mainframe was to make information such as the address book available only on the IBM mainframe. Attempts to make it more widely available were treated much the same way that Napster was treated by the RIAA. The Web took off at CERN initially because you could access the address book from a workstation or from the VAX.
Very few mainframes were actually designed to provide fast processing. The IBM 3090 series was actually designed to perform transaction processing for banks. As a scientific CPU it offered tepid performance at a price around 100 to 500 times that of a high-power workstation.
There are certain applications in which CPU cycles are still the limiting factor. Admittedly they are much smaller as a proportion of the whole than they were 10 years ago.
Re:Supercomputing? Why bother. (Score:2)
I know this is completely off topic, but Travelocity (the travel web site, you know) has lots and lots and lots of SGI Origin systems for running their front-end app-- it does session management and HTML generation, and passes data back and forth from the user to the database, so it's basically just a web server.
I've lost count, but I know they've been buying at least one 32-processor system per quarter for several years now. And, if I remember right, they recently bought something like four 32-proc Origin 3000 systems, too.
So, yeah, they've got a hell of a big web server.
Re:Supercomputing? Why bother. (Score:2)
However, I disagree with your assessment of the reliability and security of the beasts. Used in a general computing environment, the series is notable for its fragility. If, on the other hand, you only use the machine for one task, then the simpler the better, and lacking almost every feature you would reasonably expect in an O/S, MVS is a great choice - but by the same measure, so is MS-DOS.
Comparison to UNIX merely shows how far we have sunk. UNIX has never been a secure or a reliable O/S on the measures relevant to financial services. Even today if you want to run something like a chemical plant or a nuclear power station you use VMS.
Re:Supercomputing? Why bother. (Score:2)
As someone with their own supercomputer (ACME [purdue.edu] and /. of 6/6/2000 [slashdot.org]) I can say that you'll come up with a bunch of things you would like to do but haven't found the CPU time to do. This of course presumes that you have half a brain.
We run NP-complete problems to completion. Our idle loop is prime factoring one of the RSA challenge numbers. If we were to crack one of those numbers (even the $10k one), we'd more than pay for the machine (but not the A/C or power).
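(To be clear, the RSA challenge numbers are hopelessly beyond toy code - the serious attacks use sieving methods - but for flavor, here's the sort of small-scale factoring a spare-cycle loop can grind on, as a rough Python sketch.)

# rho.py - Pollard-rho factoring toy. Illustrative only; real RSA challenge
# numbers need the number field sieve and vastly more horsepower.
import math, random

def pollard_rho(n):
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d                       # found a nontrivial factor

def factor(n):
    if n == 1:
        return []
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return [p] + factor(n // p)
    if all(n % i for i in range(2, int(n ** 0.5) + 1)):
        return [n]                         # crude primality test, fine for a toy
    d = pollard_rho(n)
    return factor(d) + factor(n // d)

print(sorted(factor(600851475143)))        # [71, 839, 1471, 6857]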
I do ponder what a typical PBS.org [pbs.org] reader would do with their own supercomputer. Most lack the sophistication to get a return on investment on even just the air conditioning and electricity, let alone the cost of the hardware and the setup. But what do you expect from someone who practices identity theft [wired.com]?
All that said, it is having this class of power out in the hands of the masses that could well bring the next BIG NEW IDEA. It is neat that it can be done and I hope a bunch of /.ers write the code they want to run on such a thing then build one to run it.
-- Multics
Oh no! Someone stole Peter Pan's identity! (Score:2, Interesting)
How exactly do you steal the identity of someone who never existed? The man we know as Robert X. Cringely was the InfoWorld Cringely for 8 years! I'd say he pretty much defined who that Cringely was (or is today; I don't read InfoWorld). Saying he practices identity theft would be a valid argument if the InfoWorld Cringely was someone else and he had just appropriated the name for use on PBS, but he didn't. He built up the InfoWorld Cringely, and so I believe he has a right to go on with the persona he's used for all this time.
nah... (Score:3, Offtopic)
Imagining a cluster of TiBooks now... (Score:1, Troll)
At $1,499 for a 600MHz iBook, 20 of these would cost ~$30k, but you couldn't use the channel-bonding concept, unfortunately. You'd be stuck with 100bT, which would probably get swamped with any real work in a 4-iBooks-per-switch, 6-switch topology... without even trying to minimize latency.
20 iBooks would also take up about 8x9.1x11.2 inches per stack, so all 5 stacks would take up about 40 inches of space... You could stick these next to a desk or bed and use it as an end table! Okay, that'd be a tall end table...
At $2,999 for a 667MHz TiBook, 20 of these would cost ~$60k, but these *are* Gigabit-capable! In a similar topology, or perhaps because of prices for Gigabit switches, you might as well use one switch. Who knows?
Of course heat is even more of an issue, but given the same space as the iBooks, there's a whole extra half inch of space available to the TiBook!
40x9.4x13.5 inches! It would even make a good space heater!
Okay, okay, I know, it's damn expensive. But... consider: how much is a 20-CPU machine from HP or IBM? I know, I know, they tackle different uses, like reliability, uptime, I/O throughput, etc. A 4-way pSeries 680 from IBM is $220k, from their own website.
Damn... I wonder when Apple is going to release a thin rackmount slab server?
Re:Imagining a cluster of TiBooks now... (Score:2)
Even more interesting ideas. (Score:3, Interesting)
Not only could you hook them together using Gigabit Ethernet, you could take advantage of the FireWire port as well, perhaps chaining them together with some sort of SAN. You'd still be limited by the ~50MBps, though perhaps that's not useless; I don't know.
Still, with the RAM bay you could up the memory from 1GB to something crazy, like 16GB. The battery is useful as a backup/emergency device, allowing the slab to run for about 4 hours in case of emergency (woo!).
You could even conceivably netboot the thing, since OS X allows for that, right? Minimize the hard drive or get rid of it altogether... you could seriously make a slab about the size of 1/2" by 8" by 8", I suspect.
Talk about power! (Score:2, Informative)
The costs of a clustering setup go well beyond the initial hardware. At the level that Cringely is building (with only 6 machines), it may not be a huge problem, but running KLAT2 will cost you some dough just for the power.
A couple years ago I made a dumb mistake and bought a saltwater reef tank without realizing that it would end up costing me $150/mo. in electricity bills (it ain't cheap running 4000+ watts in lights and pumps 18 hours a day). I'm sure running 66 machines 24 hours a day ain't cheap either.
Re:Talk about power! (Score:2, Informative)
I live in Quebec, where electricity is about the cheapest in the world, costing about 6 to 7 cents (CAN) per kWh. I don't know how much it is in the US.
so I have: 66 machines x 300W x 24 hours = about 475 kWh, or roughly $28.50 a day in the worst case. That might be a bit pricey for a household, but it is cheap for a university. Of course, electricity is much more expensive in the US - I have seen prices of $0.14/kWh in New York many years ago - but the 300W power supply is probably not being fully used, making it cheaper.
Re:Talk about power! (Score:2)
Still, here [caltech.edu] someone has taken the time to measure the power usage of a 500MHz G3 PowerBook: 17.54W under full load! A G4 draws a couple of watts more than a G3, but that's probably offset by the fact that the LCDs won't be powered on at all.
So 66 PCs at 75W suck up about $7 a day.
66 PowerBooks at 15W suck up about $1.50 a day. A month means $210 vs. $45, and a year means $2,555 vs. $547.50.
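Rough arithmetic behind those numbers, assuming the ~6 cents/kWh rate from the parent post (plug in your own rate):

# power_cost.py - back-of-envelope cluster power cost
RATE = 0.06                                # $/kWh, assumed

def daily_cost(machines, watts_each):
    kwh = machines * watts_each * 24 / 1000.0
    return kwh * RATE

for label, watts in (("66 PCs @ 75W", 75), ("66 PowerBooks @ 15W", 15)):
    d = daily_cost(66, watts)
    print(f"{label}: ${d:.2f}/day, ${d * 30:.0f}/month, ${d * 365:.0f}/year")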
Of course, the notebooks do cost more than the $2k delta, but 66 iBooks is a lot cooler, niftier, and more compact than 66 PCs.
You could probably stick it in the corner and use it as a space heater.
This is silly (Score:3, Funny)
Also, if everybody had a supercomputer in their garage, they would no longer be so "super."
If those fans fail... (Score:1, Offtopic)
...you'll be lookin' at a whole lotta Kentucky Fried Penguin!
My Hank Dietz (creator of KLAT2) story (Score:4, Interesting)
Dr. Dietz used to teach at Purdue, and I had the good fortune to take a compiler course taught by him. On the first day, when introducing himself, he came to the part where he was describing how to get into contact with him. When giving out his phone number (at Purdue, on-campus numbers were 5 digits long), he mentioned that his phone number was "GEEKS". He added, "No, I didn't ask for GEEKS, but when I figured it out, I thought it was pretty cool."
Needless to say, it was a pretty cool course.
Wow! (Score:1, Flamebait)
...I'll get me hat...
A real supercomputer? Not exactly (Score:5, Insightful)
Clusters are great for embarrassingly parallel applications (i.e. ones that have threads which don't communicate with each other much). This includes things like SETI@home and batch rendering of images. Where they don't compare is applications that communicate a lot, like nuclear physics simulations. This is not to say that that will never change in the future, but for the time being it's still true.
Last, and certainly not least, real supercomputers have memory bandwidth that can match the speed of the processor. A Cray or an SGI Origin has an absolutely massive amount of bandwidth from the processor to local memory compared to a PC. That allows a traditional supercomputer to actually *achieve* the fantastic peak performance numbers. On many applications, the working sets are huge and don't fit in cache, so you end up relying on memory being fast. On a PC, it's not, and I've heard from sources I consider reliable (though I have no actual numbers to back this up, so it may be rumor only) that one large cluster site sees around 10% or less of peak on a cluster for a nuclear physics simulation, whereas on a vector Cray you can hit ~80% of peak. This means that the cluster has to be 8 times more powerful, and when you start multiplying the costs by 8, they start looking like the same price as a real supercomputer.
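To spell out that last step with the same 10% vs. 80% numbers:

# efficiency.py - why sustained (not peak) FLOPS drive the real cost
cluster_eff, vector_eff = 0.10, 0.80       # fractions of peak actually sustained
overbuild = vector_eff / cluster_eff
print(f"the cluster needs {overbuild:.0f}x the peak FLOPS for the same real work")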
So my point is that building a real supercomputer does not mean grabbing a bunch of off-the-shelf components, slapping them together with a decent network and running Beowulf (or a similar product).
A real supercomputer? Yes, exactly (Score:4, Interesting)
Yes, these systems are sometimes not the best for handling vectorizable jobs, but they are so inexpensive compared to the old specialized hardware that it is easier to waste cycles than to build special hardware.
As to memory bandwidth: modern CPU caches make the question nearly moot.
If all of this were not true, then people wouldn't be building clusters and the majority of the Top500 list wouldn't be clusters. Instead, there are only 3 traditional-architecture machines in the top 20. This is the reason that Cray (et al.) no longer dominates the marketplace... commodity systems have overtaken nearly all of the specialized hardware world.
-- Multics
Re:A real supercomputer? Yes, exactly (Score:2)
This is simply not true. Your other points are pretty wacked, too, but I'll take this one because I have personal experience.
I have some image processing code that runs on IRIX, and I recently did a shoot-out between an Origin 2000 and an Origin 3000. Both machines had eight 400 MHz R12000 processors with 8 MB of secondary cache and 4 GB of RAM, and both were equivalently equipped for disk.
The Origin 3000 was almost twice as fast as the 2000 was, with identical CPUs, memory, and disk. (The actual numbers are on a spreadsheet at the office, unfortunately.) The difference? Memory and interprocessor bandwidth. The Origin 3000 platform has a specified memory bandwidth of about 2.5 times the bandwidth of the Origin 2000.
The test involved taking a big multispectral image, splitting it up into tiles, handing each tile off to a thread, and doing some processing on the tiles. The data set was pretty huge, but not so big that it couldn't be cached entirely in RAM, so the first step was to load the whole thing into memory. But for the actual test run, there was a lot of fetch-operate-fetch, which really exercised the memory bandwidth of the system.
So your comment about memory bandwidth being moot is completely off base.
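For what it's worth, the shape of that tile test was roughly this - a Python/numpy stand-in with made-up sizes, not the actual IRIX code:

# tile_bench.py - split a big in-memory multispectral image into tiles, hand
# each tile to a worker thread, and time the memory-bound processing pass.
import time
import numpy as np
from concurrent.futures import ThreadPoolExecutor

BANDS, H, W, TILE = 8, 2048, 2048, 256
image = np.random.rand(BANDS, H, W).astype(np.float32)   # stand-in data set

def process_tile(origin):
    y, x = origin
    tile = image[:, y:y + TILE, x:x + TILE]
    band_means = tile.mean(axis=(1, 2))                   # fetch-operate-fetch work
    ratio = (tile[1] - tile[0]) / (tile[1] + tile[0] + 1e-6)
    return float(band_means.sum() + ratio.mean())

tiles = [(y, x) for y in range(0, H, TILE) for x in range(0, W, TILE)]
t0 = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_tile, tiles))
print(f"{len(tiles)} tiles processed in {time.time() - t0:.2f}s")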
Re:Use DDR RAM (Score:2)
Crossbar-style system interconnects are not new ideas. I'm not an authority on the subject, but I know that the Cray Y-MP had a 32-port switch architecture that provided about 1.3 GB per second of memory bandwidth per processor (hope I'm remembering these numbers right!)
The DEC VAX 9000 series had a 1 GB/second CPU-to-memory pathway that utilized a crossbar switch, also.
Both of these systems were in wide use around 1990, give or take a few years. And, of course, the ideas go back much further than that. I used to have a copy of a paper by Wulf in Communications of the ACM dated 1974 that described a switch-based multiprocessor system. Can't find it right now, alas.
Things have come a long way. From 1 GB/sec aggregate in 1990 to 22 GB/sec aggregate in 1998 (the Cray SV1) to 40 GB/sec aggregate in 2001 (the SV1ex). The SV1ex provides each processor with 6.4 GB/sec of bandwidth into and out of main memory.
Increasing the speed of the RAM isn't the issue-- the SV1ex uses commodity SDRAM. The issue is building sufficiently large parallel paths for the memory controllers to execute very large parallel fetches into a vector cache.
So I guess you could say that you're headed in the right direction, but you've got a long way to go.
Re:A real supercomputer? Yes, exactly (Score:2)
As for cycles wasted vs. cost, that is going to depend on the applications involved. At some point, the sheer cost of the power wasted is going to be a factor. Obviously not on a garage-built six-node cluster, but if you start talking about 2048p, the power *will* be an issue.
The IBM SP, while being *mostly* commodity uses some non-commodity parts and has a lot of proprietary software to make it work.
CPU caches, modern or otherwise, are not an issue with an application that has, say, a 1 gigabyte working set. It simply doesn't fit in the cache no matter what you do. You can restructure loops to make things better, but you're still going to be banging on memory.
You're right, commodity systems have overtaken a lot of areas that used to require traditional supercomputers, but then, the market for traditional-architecture supercomputers has *never* been big.
"Ultra Wide Band" - not (Score:2)
The FCC is being very cautious about mass-market UWB products. Since these things blither over a gigahertz or so of spectrum, they overlap with other services. At low power, a few of these things are probably OK, but in bulk, there could be trouble. The concern is that mass deployment could wipe out other services in congested areas.
QNX? Hey Cringely... (Score:3, Funny)
If you're looking for software that almost works, I know of an OS that might fit your needs. You're not going to hook this thing up to the Internet, though, are you?
Re:QNX? Hey Cringely... (Score:2)
Uh, Cringely, wouldn't creating the thing and then using it as the subject of an article for the company that employs you count as a commercial purpose?
You don't really expect QNX to bitch about a little free advertising do you?..
only fastest one percent is supercomputer (Score:3, Informative)
Uses for mini-cluster (Score:2, Funny)
2) Top 10 in Seti@home
3) Porno-ize you favorite anime (Final Fantasy anyone?)
4) Why are you reading this? I thought you were doing #3
Re:And I want $10M in every pocketbook... (Score:2, Interesting)
I was in the office of a research company and the owner showed me their shiny new minicomputer. I can't remember what kind it was, unfortunately.
He said something then that struck me as very insightful and I've not forgotten it to this day.
"You know, minicomputers are looking more like micros every day."
Re:D-Link sells Gigabit NICs for cheap (Score:2)
Wouldn't help? (Score:2)
Re:D-Link sells Gigabit NICs for cheap (Score:2)
I know D-Link's PCMCIA 100BaseTX cards are 16-bit, so while they will signal at 100Mb/sec, their throughput is not any more (as far as I can see) than you would get from an old desktop NE2000 adapter. Low-end network hardware frequently pulls this kind of stunt -- repackaging old technology so that it looks like it should perform better than it actually can.