Simulating the Universe with a zBox

An anonymous reader writes "Scientists at the University of Zurich predict that our galaxy is filled with a quadrillion clouds of dark matter, each with the mass of the Earth and the size of the solar system. The results, published in this week's issue of Nature and also covered in Astronomy magazine, come from a six-month calculation on hundreds of processors of a self-built supercomputer, the zBox. This novel machine is a high-density cube of processors cooled by a central airflow system. I like the initial back-of-the-envelope design. Apparently, one of these ghostly dark matter haloes passes through the solar system every few thousand years, leaving a trail of high-energy gamma-ray photons."
This discussion has been archived. No new comments can be posted.

  • Hmm (Score:4, Funny)

    by SilentChris ( 452960 ) on Saturday January 29, 2005 @02:34PM (#11514887) Homepage
    Looks like MS will need to come up with a new name for the Xbox 3.
  • by Anonymous Coward on Saturday January 29, 2005 @02:34PM (#11514895)
    At the center of the box is a small piece of fairy cake
  • by Anonymous Coward
    If people can't predict the weather reliably, how on earth is anyone able to predict anything about the way the universe operates?
    • by Sique ( 173459 ) on Saturday January 29, 2005 @02:50PM (#11514978) Homepage
      The problem in question is the number of distinguishable bodies. With weather you would have to go down to the individual molecules in the air to get a really good prediction. In fact, current weather models use cubes of air within which the conditions are considered constant (same temperature, same pressure, same direction of airflow throughout the cube) and treat those cubes as the distinguishable bodies. Such models are a compromise between the sheer number of necessary elements, the number-crunching limits of current hardware, and the difference between the model and reality.

      With stellar bodies it's much easier. The number of bodies you need for a prediction is much smaller, the bodies themselves can be considered almost constant for the whole calculation, and so on. With the number-crunching capacity of today's weather prediction centers you can simulate whole galaxies (if you treat stars as constant, which they mostly are for about 10 million to 10 billion years, depending on their mass). Where your model differs from the measured reality, you can spot elements you haven't simulated yet and add them to the model. The Swiss team simulated clouds of about the mass of the Earth and the size of the solar system, and found that adding these to the stellar simulation gave quite a good fit to the measured data.
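
      For illustration only: the "cubes of air" picture boils down to a grid update like the toy 1-D advection step below. This is a hypothetical Python sketch (function name and numbers invented), nothing like a production weather code:

      import numpy as np

      def advect_1d(temp, wind, dt, dx):
          """One upwind finite-difference step for a scalar (e.g. temperature)
          carried along a 1-D row of grid cells by a constant wind speed.
          Real models do this in 3-D for many coupled fields (pressure,
          humidity, momentum, ...), but the 'conditions constant inside each
          cube' idea is the same."""
          # upwind scheme, assuming wind > 0 and periodic boundaries
          return temp - wind * dt / dx * (temp - np.roll(temp, 1))

      cells = np.array([10.0, 10.0, 15.0, 20.0, 15.0, 10.0])  # deg C in each cell
      print(advect_1d(cells, wind=5.0, dt=60.0, dx=1000.0))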
      • I'm still waiting for a simulation engine that models the subatomic and atomic particles' behaviors.

        Basically, I want a program that simulates chemical reactions. If I have a bunch of molecules mixed together, and I add another mixture, what will happen, on the atomic level?

        We have SPICE for electrical circuits. Why not something for chemical reactions?
        • by Anonymous Coward
          If I remember correctly, chemical reactions happen at the atomic level, not subatomic.
          Here is a GPL program, there are plenty of others (commercial and FOSS):
          http://ruby.chemie.uni-freiburg.de/~martin/chemtool/chemtool.html [uni-freiburg.de]
          • by orangesquid ( 79734 ) <orangesquid@ya h o o . com> on Saturday January 29, 2005 @05:47PM (#11516060) Homepage Journal
            Well, you'd have to have a capture/drawing tool like Chemtool, and then something that could approximate polarity, electrical charge distribution, and bond length/strength. (Those involve things like electron orbitals, hence the subatomic.) Next, you'd need something that handles the movement of fluids and gases with respect to temperature, pressure, etc. (the gas laws and the partial differential equations whose exact solution is one of the Clay Institute Millennium Prize problems). Then you'd need something that predicts, probabilistically, what happens when two or more molecules interact. These interactions would have to be modeled in terms of molecular collisions, so that things like titration, stirring, etc., would be accurate.

            Finally, you'd have something which would prepare an "answer" to each problem by waiting for a reasonable amount of precipitate to settle, or measuring pH, or simulating a gas chromatograph of the contents of the beaker.

            Other helpful things would be crystallization and such. I would think that if you could simulate the physical laws and properties at a sufficiently low level, most things would arise automatically, but IANAC.
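
            To give a flavour of the lowest rung of that ladder, here is a toy classical molecular-dynamics force routine for a Lennard-Jones fluid in Python. It is a hypothetical sketch (names and reduced units invented for illustration); the potential is purely classical, so it cannot model bonds breaking or forming, which is exactly why reaction prediction needs the quantum-chemistry packages mentioned in the reply below:

            import numpy as np

            def lj_forces(pos, epsilon=1.0, sigma=1.0):
                """Pairwise Lennard-Jones forces on N atoms.
                pos: (N, 3) positions in reduced units. Returns (N, 3) forces."""
                dr = pos[None, :, :] - pos[:, None, :]   # separation vectors r_j - r_i
                r2 = (dr ** 2).sum(-1)
                np.fill_diagonal(r2, np.inf)             # skip self-interaction
                inv_r6 = (sigma ** 2 / r2) ** 3
                # force magnitude / r, from F = -dU/dr with U = 4*eps*((s/r)^12 - (s/r)^6)
                fmag = 24 * epsilon * (2 * inv_r6 ** 2 - inv_r6) / r2
                return -(fmag[:, :, None] * dr).sum(axis=1)  # total force on each atom i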
            • Depending on what exactly you're trying to figure out, you might not even need to simulate multiple molecules. If you just want to know if two relatively small organic compounds will react to form other organic compounds, you could probably at this point in time do it with a quantum chemistry package like MPQC (among others). Ghemical (http://www.uku.fi/~thassine/ghemical/) will interface with those. But I should warn you, even calculating a 10-atom system with dynamics will take ages. But this is pretty mu
      • by Anonymous Coward
        "The problem in question is the number of distinguishable bodies. With weather you would have to go down to the single molecule in the air, to get a quite good prediction"

        This is utter nonsense.

        Weather models are fluid models, not particle models like the N-body simulation described here. They are quite different, and require different computational approaches. Both are numerically intensive, however.
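
        To make the distinction concrete, here is a minimal direct-summation N-body step in Python. It is purely illustrative (names and units invented) and is emphatically not the Zurich group's code; production cosmological codes use much cleverer parallel algorithms, since an O(N^2) sum like this would never scale to their particle counts:

        import numpy as np

        G = 1.0  # gravitational constant in arbitrary code units (assumption)

        def nbody_step(pos, vel, mass, dt, softening=1e-3):
            """One leapfrog (kick-drift-kick) step for N self-gravitating
            particles. pos and vel are (N, 3) arrays, mass is (N,)."""
            def accel(p):
                dr = p[None, :, :] - p[:, None, :]          # r_j - r_i
                dist2 = (dr ** 2).sum(-1) + softening ** 2  # softened distance^2
                inv_d3 = dist2 ** -1.5
                np.fill_diagonal(inv_d3, 0.0)               # no self-force
                return G * (dr * inv_d3[:, :, None] * mass[None, :, None]).sum(axis=1)

            vel = vel + 0.5 * dt * accel(pos)   # half kick
            pos = pos + dt * vel                # drift
            vel = vel + 0.5 * dt * accel(pos)   # half kick
            return pos, vel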
    • Predicting that it will rain at some point in the next six months is a lot easier than predicting it with any real, useful accuracy.

      It's the difference between saying that it does rain and saying when it will. On this scale they are just explaining a phenomenon that happens every so often, in a stellar sense. I'm guessing this eases the difficulty of the computation compared to what would be necessary to predict the number of years before the next occurrence.
  • This looks like the processor setup for the new Pentium mentioned in an earlier article. This sucker uses more power than my house!
  • by Anonymous Coward on Saturday January 29, 2005 @02:36PM (#11514910)
    zBox has been slashdotted, vat do ve do now? Ve relax and eat some cheese and chocolate, jawohl!
  • by Anonymous Coward on Saturday January 29, 2005 @02:37PM (#11514916)
    [Picture of loads of wires in what looks like a thousand desktops interlinked]

    Our in-house designed (Joachim Stadel & Ben Moore 2003), massively parallel supercomputer for running our cosmological N-body simulations. This machine consists of 288 AMD Athlon-MP 2200+ (1.8 GHz) CPUs within a few cubic meters. Under load it produces about 45 kW of heat, about equivalent to 45 electric hair dryers operating continuously! This amount of heat, combined with the extremely high density necessitated a new design for efficient cooling. The 144 nodes (2 CPUs per node) are connected using an SCI fast interconnect supplied by Dolphin in a 12x12 2-dimensional torus. The layout of the machine is ring-like, thereby allowing very short "ribbon" cables to be used between the nodes. This fast interconnect network attains a peak bisection bandwidth of 96 Gbits/sec, with a node-node write/read latency as low as 1.5/3.5 microseconds. Additionally the zBox has 11.5 TBytes of disk (80 GBytes/node) and 3 Gbits/s I/O bandwidth to a frontend server with 7 TB of RAID-5 storage. This is among the fastest parallel computers in the world! At "first light" it ranked in the top 100, but the technology advances quickly. (see top500, June 2003: Rank 144) (see top500, November 2003: Rank 276)

    We greatly acknowledge the aid of the Physics Mechanical Workshop at the University of Zurich for: 1) turning the "napkin-sketch" into a proper CAD/CAM design of the machine; 2) providing numerous suggestions which improved the detailed design; 3) providing a gigantic room for the construction of the boards; 4) and, well, building the thing! We thank the companies of Dolphin (dolphinics.com) for supplying the high speed network and COBOLT Netservices for supplying the majority of parts. We would like to especially thank the individuals: Doug Potter and Simen Timian Thoresen for their great help in setting up the linux kernel and root file system, getting netbooting to work correctly, and resolving several operating system related problems. Finally we thank all who helped in the construction of the zBox (assembly of boards, etc), Tracy Ewen, Juerg Diemand, Chiara Mastropietro, Tobias Kaufman
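
    For readers wondering what the "12x12 2-dimensional torus" above means in practice: each node talks directly to four neighbours, and the grid wraps around at the edges, which is what lets such short ribbon cables do the job. A hypothetical Python sketch of the neighbour bookkeeping (nothing to do with the actual zBox software):

    def torus_neighbors(node, nx=12, ny=12):
        """Return the four neighbours of `node` on an nx-by-ny 2-D torus.
        Nodes are numbered 0..nx*ny-1, row-major; the modulo arithmetic
        provides the wrap-around that makes it a torus rather than a grid."""
        x, y = node % nx, node // nx
        return {
            "east":  ((x + 1) % nx) + y * nx,
            "west":  ((x - 1) % nx) + y * nx,
            "north": x + ((y + 1) % ny) * nx,
            "south": x + ((y - 1) % ny) * nx,
        }

    print(torus_neighbors(0))   # node 0 wraps west to node 11 and south to node 132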
  • by chiph ( 523845 ) on Saturday January 29, 2005 @02:39PM (#11514925)
    ...you just slashdotted Switzerland. Who's next, tough guy? Andorra? [andorra.ad]

    Chip H.
  • by StupendousMan ( 69768 ) on Saturday January 29, 2005 @02:42PM (#11514940) Homepage
    You can read the entire paper in PDF or PS at astro-ph, a web site which collects preprints in the physical sciences. See

    http://xxx.lanl.gov/abs/astro-ph/0501589

    I read the paper quickly. The authors have to come up with a model which has virtually no observable consequences (otherwise, we would have seen this source of matter by now), but which can also be tested experimentally in the not-too-distant-future (or else it wouldn't be science). They predict that some of the cosmic-ray shower telescopes may be able to detect the little cloudlets of dark matter. We'll see.

    • by Anonymous Coward
      We'll see.

      Only if we can shed some light on the matter.
    • but which can also be tested experimentally in the not-too-distant-future (or else it wouldn't be science)

      There are plenty of events and areas of study which aren't directly experimentally verifiable but which are considered science. Like evolutionary biology and big bang cosmology. Science is not as easy to define as most people (including most /.ers) like to imagine. If it were, the philosophy of science wouldn't be a very interesting discipline.
      • There are plenty of events and areas of study which aren't directly experimentally verifiable but which are considered science. Like evolutionary biology and big bang cosmology.

        Both of which contain some testable statements (e.g. in cosmology, inflation predicts certain properties in the microwave background on specific angular scales), and some untestable statements. Scientists (ought to) ignore the latter.

        Science is not as easy to define as most people (including most /.ers) like to imagine.

        • I'm a practicing scientist and find the philosophy of science both interesting and irritating. It's interesting (and important) because questions like "What makes a good explanation?" aren't quite part of science but still need to be asked. It's irritating because philosophers are by and large such complete and utter wankers.

          --Tom
  • So one of these dark matter clouds may pass through the solar system every few thousand years? Have they taken the next step and hypothesized that such an event could account for major climate changes? Like the event that killed off the dinosaurs?

    • by flyingsquid ( 813711 ) on Saturday January 29, 2005 @02:51PM (#11514985)
      So one of these dark matter clouds may pass through the solar system every few thousand years? Have they taken the next step and hypothesized that such an event could account for major climate changes? Like the event that killed off the dinosaurs?

      It'd be interesting if these things could be tied to mass extinctions, but these occur much more rarely than every few thousand years. And unless these clouds can account for high levels of iridium, shocked quartz, melt glass, and a hundred-mile impact crater in Mexico, it's not terribly likely they account for the dinosaur extinction.

    • or maybe that the increased radiation leads to an increase in mutations that speed up evolution?
    • Since "dark matter" is per definition electromagnetically weakly interacting, such hypothesis wouldn't stand a chance.
    • Have they taken the next step and hypothesized that such an event could account for major climate changes? Like the event that killed off the dinosaurs?

      Well ... asteroids don't give off a lot of light :)

      Is there any reason why dark matter has to be exotic? My layman's instinct would be that for every star we see in the sky, there must be a large number of Jupiter-sized balls of gas and debris that never managed to accumulate enough mass to ignite. Would we even be able to detect these or get an esti

      • by Anonymous Coward
        There are searches for such objects (called MACHOs: Massive Astrophysical Compact Halo Objects), usually by looking for gravitational lensing. To account for the dark matter there would have to be trillions of these objects in the galaxy, and they would occasionally pass in front of other stars (or galaxies). When that happens, the bending of light in the gravitational well of the MACHO can cause the background star to become brighter. Several experiments are currently searching for this bri
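
        The brightening described above follows the standard point-source, point-lens magnification formula; a quick illustrative Python sketch (textbook relation, not tied to any particular survey):

        import math

        def magnification(u):
            """Point-source, point-lens microlensing magnification, where u is
            the lens-source angular separation in units of the Einstein radius.
            A(1) is about 1.34 and A grows without bound as u -> 0."""
            return (u * u + 2) / (u * math.sqrt(u * u + 4))

        for u in (1.0, 0.5, 0.1):
            print(f"u = {u}: magnification = {magnification(u):.2f}")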
  • Hrmm... The next time one of these "ghostly dark matter haloes" comes around, can we launch every copy of "Gigli" into it?

    Or maybe they could use that supercomputer to calculate how much fuel it would take to launch these dreaded things into the sun.

    Well... it's just a thought.

  • Photo Story (Score:3, Informative)

    by gustgr ( 695173 ) <(gustgr) (at) (gmail.com)> on Saturday January 29, 2005 @02:44PM (#11514946)
    http://krone.physik.unizh.ch/~stadel/zBox/story.html [unizh.ch]

    The 3D temperature monitor is really cool.
  • Mirror (Score:5, Informative)

    by Rufus211 ( 221883 ) <rufus-slashdot@h ... g ['ack' in gap]> on Saturday January 29, 2005 @02:45PM (#11514953) Homepage
    Maybe they should have used the zBox to host their site =)

    http://rufus.hackish.org/~rufus/mirror/krone.physik.unizh.ch/~stadel/zBox/ [hackish.org]
  • by Doverite ( 720459 ) on Saturday January 29, 2005 @02:45PM (#11514954)
    All it keeps saying is 42...42...42...42...
  • ... does it run SkyOS [skyos.org]?
  • by deft ( 253558 ) on Saturday January 29, 2005 @02:51PM (#11514988) Homepage
    but then I read in the article:
    "and had sufficient forced air through the heat exchangers to transport the heat from a small car out of this small room."

    Surprising.
  • Astronomers from Tacoma to Vladivostok have just reported an ionic disturbance in the vicinity of the Van Allen Belt. Scientists are recommending that necessary precautions be taken.
  • by tengwar ( 600847 ) <slashdot@vetinar ... rg minus painter> on Saturday January 29, 2005 @02:57PM (#11515024)
    I wonder if anyone can answer a naive question for me. As I understand it, the solar wind consists of charged particles moving outwards from the sun. (a) Do these have a net charge? (b) If so, does this mean that there is a net movement of charge outwards from the galaxy?

    The reason I'm interested is that a non-neutral charge distribution would tend to attract the outer part of the galaxy towards the centre more than would be expected from gravity alone, which is (simplistically) the evidence for dark matter / energy.

    • by chazR ( 41002 )

      As I understand it, the solar wind consists of charged particles moving outwards from the sun. (a) Do these have a net charge?


      No. There's no net charge. If one developed between the sun and the solar wind, the solar wind would fall straight back in.

      A good primer on dark energy can be found here [caltech.edu]
      • Thanks for the link, I'll follow up on that. However you say:

        No. There's no net charge. If one developed between the sun and the solar wind, the solar wind would fall straight back in.

        I have to say that that does not follow. Whether the particles fell back in would depend on their initial velocity and on the total charge of the sun. Do you have any other reason for saying that there is no net charge?

        • Whether the particles fell back in would depend on their initial velocity and on the total charge of the sun.

          Absolutely. However, the only force currently causing them to accelerate towards the sun (that is, slow down) is gravity. If there were a net charge between the sun and the solar wind, there would also be an electromagnetic effect. This would be many orders of magnitude greater than the gravitational component. Observation of the solar wind shows that this is not the case. (If it were the case, t
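
          To put "many orders of magnitude" in perspective, here is a back-of-the-envelope comparison of the Coulomb and gravitational forces between two protons (standard constants, purely illustrative; the separation cancels out):

          G  = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
          k  = 8.988e9        # Coulomb constant, N m^2 C^-2
          mp = 1.673e-27      # proton mass, kg
          e  = 1.602e-19      # elementary charge, C

          ratio = (k * e * e) / (G * mp * mp)
          print(f"F_coulomb / F_gravity ~ {ratio:.1e}")   # roughly 1e36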

          • Yikes... doesn't that assume that the particles are propagating into a theoretical vacuum? Which of course they are not.

            The particles propagate (the acceleration mechanism is still unknown) into a dynamic system that is so totally unknown it's not even funny. We have no idea at all how the low corona works, what the fields are (only theoretical estimates with large error bars), what the plasma interactions are (many of which are probably not even on our charts), etc. The high corona is no exception. Interplanetary spac
    • Whether the ions in the solar wind worked as an attractive force towards the rest of the galaxy would depend on whether they had enough energy to move out that far. If they had enough energy, then at least that would give me an opening to advocate pushing gravity [everything2.com] again, which by the way is also a cool thing to simulate on a zBox.
  • by krang321 ( 854502 ) on Saturday January 29, 2005 @03:00PM (#11515037) Homepage
    "All 288 CPUs shipped by AMD worked perfectly and none needed to be replaced" My 500Mhz AMD works perfectly... as long as I use reliable software (Linux) not that other product - what's it called again... XPee?
  • Imagine a beowulf cluster of these things!
  • I'm glad to see some folks still doing things the old-fashioned way, even if it was a couple of years ago. One question, if anyone is familiar with the zBox: why not go with 2 or 3 racks packed with commercial 1U 2-CPU nodes? Was it cost? Heating/cooling? Perhaps it just wouldn't have had that "this is cool shit" factor...
    • Price? They built it themselves, so surely that's cheaper than getting it practically pre-built.
    • My guesses are: 1) Cost. A commercial 1U dual-processor pizza box is actually very expensive for the computing power, compared to the do-it-yourself method. Of course, you're mostly paying for support, overhead, the brand name, and a sturdy, cool-looking case. 2) Cost. Commercial racks can be pretty pricey, too. 3) I/O speed. The zBox is wired up for good inter-CPU throughput, whereas you lose significant speed with the typical Ethernet patchboard scheme you find in a commercial rackspace. Of these
    • Price is what it seems like. Way cheaper to build it yourself -- especially with help from the university's metal fab people -- than to just buy off-the-shelf rack-mount systems. It's a pretty straightforward design if you look at the pictures.
    • One question that I have, if anyone is familiar with zBox. Why not go with 2 or 3 racks packed with commercial 1u 2CPU nodes? Was it cost, Heating/cooling? Perhaps it just wouldn't of had that "this is cool shit" factor...

      Because you could get graduate students to work on it. Graduate students are sort of like slaves, except that you don't have to feed them like you do slaves.

    • The biggest reason for the design is the node interconnect speed. From the article:

      The 144 nodes (2 CPUs per node) are connected using an SCI fast interconnect supplied by Dolphin in a 12x12 2-dimensional torus. The layout of the machine is ring-like, thereby allowing very short "ribbon" cables to be used between the nodes. This fast interconnect network attains a peak bisection bandwidth of 96 Gbits/sec, with a node-node write/read latency as low as 1.5/3.5 microseconds.

      Commercial racks could not

  • How would this perform with the new 2.6 Linux kernel?
    The one they are using seems to be a pretty old SUSE distribution.
    • The dates on the pages are from two years ago. They might very well be running something newer now.
    • Generally the applications you run on a supercomputer are dedicated (i.e. one process should get all of the CPU except when doing I/O and other kernel operations). Because of this, any scheduler should be able to do as well as any other, so there's no reason to change to a new kernel.

      Of course, changing the kernel might get you slightly better drivers and improve I/O performance, and perhaps memory allocation, but the linux 2.4 kernel was mature enough that I doubt there are any significant improvements f
  • by panurge ( 573432 ) on Saturday January 29, 2005 @03:14PM (#11515106)
    I actually had a similar thought myself a few months ago: put a group of 4 HDs and 4 mobos on a large aluminium plate, place it in a wide, flat enclosure, and feed air in at the center and out via 4 peripheral ducts, to build a 4-way unit that could sit under a set of office desks arranged roughly in a square. The benefit is that the hardware takes up zero usable desk space, is well protected from physical damage, and the under-desk airflow keeps the noise low. For high-density offices (e.g. call centers), with all power and network connections feeding into the center of the desk clusters, this could be a very efficient arrangement. It's nice to know I was beaten to it by a Swiss supercomputer.
  • You guys are screwed.
    From now on, I'm carrying a scorpion in my pocket!

    MUAHAHAHAHA
  • 6 Months? (Score:5, Interesting)

    by Dan East ( 318230 ) on Saturday January 29, 2005 @03:23PM (#11515142) Journal
    If the computer ran for 6 months straight using 1.8GHz processors, couldn't they have waited several months and utilized newer CPUs running at double the speed, halving the computation time?

    Regarding their design, I'm somewhat surprised they used an individual power supply for each board. It seems there should be more efficient and smaller power systems available that could power multiple boards at once. It looks like a quarter of the volume of the computer is taken up by power supplies. Plus all that extra heat is thrown into the mix too.

    Dan East
    • I'm no expert on homebrewing supercomputers, but I would think the reason they go with one power supply per node is redundancy, so that it is very easy to swap out individual nodes with bad components.

      I would guess in these types of applications clean power is an absolute must which is another reason to use individual power supplies with more than enough juice on the rails to keep the CPU happy.
    • No, because for them to "profit" computationally, the new CPUs would have had to come out within 3 months.

      Sometimes the measure of efficiency is called "you have X Euros to spend on this project".

      While not technically the most efficient, if said mobo + CPU + power supply costs $250, compared to utilizing a bunch of blade-like units that effectively cost $500 per CPU, then you go with the less "efficient" solution.

      Sort of like all those render farms we've seen pictures of, where they just have 1000 or so
    • Along with Forbman's comment, I would also guess that doubling the CPU speed won't cut the time in half -- other factors like bandwidth also play a role. You also have to consider that newer CPUs will likely produce more heat and consume more power, and the parts of the article I read suggest that they can't handle much more heat.
    • It's Zeno's Paradox meets Moore's Theorem.
    • More or less. See the paper "The Effects of Moore's Law and Slacking on Large Computations".

      It's quite entertaining....

      http://xxx.lanl.gov/abs/astro-ph/?9912202 [lanl.gov]
    • Again, someone misses the fact that this cluster was built in 2003, when there weren't any faster chips.

      Even now, there aren't any chips that are twice as fast as a 1.8GHz Opteron. At the time, I think 2.0 was the max, and those get expensive, to the point where you are better off spending the money adding nodes than spending more per chip.
    • Re:6 Months? (Score:3, Informative)

      by ottffssent ( 18387 )
      If you pervert Moore's law into a statement of speed, you end up coming out ahead for any computation that

      1) is CPU-bound rather than interconnect-bound or disk-bound or memory-bound
      2) will take 3 years+ with current technology / budget, and
      3) produces no useful intermediate results

      At 3 years you come out even: buying current tech and running it for 3 years versus waiting 18 months and spending the same money on tech that can do the job in 18 months.

      There are few such computations. Note that the
      • Thanks, but your post is far too accurate to get modded up.
      • 1) is CPU-bound rather than interconnect-bound or disk-bound or memory-bound

        Actually, Moore's law will probably help out with memory access speed. Also, if you're interconnect-bound Moore's law will allow you to keep total CPU power constant while reducing the number of CPUs, and consequently the number of interconnects.

        2) will take 3 years+ with current technology / budget, and

        I'll definitely agree there... Moore's law isn't THAT fast...
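
        The break-even in point 2 is easy to sketch numerically. Assuming, purely hypothetically, that available speed doubles every 18 months, compare "start now" against "wait w months, then run on faster hardware" (the same exercise as the slacking paper linked elsewhere in this thread); the function below is an invented illustration, not anything from that paper:

        def total_months(job_months_today, wait, doubling_period=18.0):
            """Months until the answer arrives if you wait `wait` months and then
            run on hardware 2**(wait/doubling_period) times faster (a crude
            Moore's-law assumption)."""
            return wait + job_months_today / 2 ** (wait / doubling_period)

        for wait in (0, 6, 12, 18, 24, 36):
            print(f"36-month job, wait {wait:2d} months -> done after "
                  f"{total_months(36.0, wait):5.1f} months")
        # For a 6-month job like the zBox run, waiting never pays:
        print(total_months(6.0, 0), total_months(6.0, 6))   # 6.0 vs ~10.8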
    • If the computer ran for 6 months straight using 1.8GHz processors, couldn't they have waited several months and utilized newer CPUs running at double the speed, halving the computation time?

      This 'research' regarding optimizing for this effect has already been done...

      Here's a google cache of it... [64.233.167.104]

      (I searched on the name of a buddy of mine who worked on this paper to find this, which is why the search terms were like that.)

      Cheers,
      Richard

  • Great pictures (Score:1, Informative)

    by Anonymous Coward
    Everyone, take a look at those pictures. No, not of the results, but of the computer itself [unizh.ch]. The page goes over how they built the thing, with pictures of assembling the nodes, the frame, and the completed box. That's a sight to see, all the internal guts forming that piece of computing power.
  • Wow... (Score:4, Funny)

    by ccharles ( 799761 ) on Saturday January 29, 2005 @03:36PM (#11515199)
    a self-built supercomputer

    I thought we were years away from having to defend ourselves against the machines...
  • WOW (Score:3, Funny)

    by CiXeL ( 56313 ) on Saturday January 29, 2005 @03:54PM (#11515320) Homepage
    That's a lot of porn!
  • I checked the specs and saw that the interconnect they used for the cluster is SCI [scizzl.com], provided by a company called Dolphin [dolphinics.com]. 64-bit cards, works with Linux. Kind of expensive [dolphinics.com], though... SCI reminded me of an old project aimed at transporting TCP/IP packets over SCSI... There's even a page on SourceForge [sourceforge.net] now and some benchmarks.

    Is SCSI P2P used in real world clusters though? How does it compare to SCI or gigabit ethernet? Price? Performance? Status of the project? No idea...

  • While finding nearby moving compact sources of gamma rays might be one way to find these things, another possibility is to look for smaller objects; for example, looking for seismograph records [infosatellite.com] of the passage of a dark matter body through the Earth. Here the Moon is probably better, despite its smaller cross-sectional area, since it has already proven to be very quiet seismically.
  • There's a really impressive simulation, at the Hayden Planetarium ("Rose" something to the kids) in NYC, of a zoom-out from the planetarium, through the Solar System, the Milky Way, local groups, etc., to the "big picture": the entire 15B light-years of the known universe, with its weird stringy structures. It's generated from a 4D color model, projected inside the theater's dome, for a breathtakingly convincing ride.

    But it lasts only 30 minutes, and has Tom Hanks narrating over the otherwise superb soundsyst
    • This isn't exactly what you're after, but have a look at http://www.redshift.de/ - that's pretty impressive astronomy software.
      • Thanks - it looks like it might be impressive. But I'm not paying $50 to find out (maybe that it's not). Isn't there a visualizer that all the astronomers use to swap around their datasets? I'd expect such a beast to be freely available for Linux, along with the data that our public research creates.
        • Celestia [sourceforge.net] might do some of what you are after. It's gorgeous, but only (!) simulates a hundred thousand stars or so.

          The software that astronomers use for visualisation tends to be either home-grown or else part of very complicated data reduction and analysis packages (e.g. IRAF, MIRIAD, AIPS++) that nobody in their right mind would want to use if they didn't have to!

  • Could someone explain why these clouds are postulated to be not only dark, but made out of an exotic new particle? Why can't they be clouds of hydrogen? You know... something normal. I can guess, but maybe a physicist would care to respond?
  • The tech is pretty cool, but I wonder if all they're doing with it is calculating modern-day epicycles. [wikipedia.org]

    If the calculations are correct, then dark matter accounts for more mass than any single element in the universe, and it both exerts and responds to gravity. There should be some of it close by to take a look at... there should be a good deal of it here on Earth, since both the Earth and dark matter have gravity.

    I half-suspect that both dark energy and dark matter are unexpected aspects of gravity working at cosmi
  • When men were men and supercomputers were really super. It used to take a real genius, like Seymour Cray, to design a computer that could be called "super". A bunch of PC's in a custom case just doesn't do it for me.
