Hardware

Mini-ITX Clustering

NormalVisual writes "Add this cluster to the list of fun stuff you can do with those tiny little Mini-ITX motherboards. I especially like the bit about the peak 200W power dissipation. Look Ma, no fans!! You may now begin with the obligatory Beowulf comments...."
  • Imagine.. (Score:3, Funny)

    by hookedup ( 630460 ) on Thursday February 26, 2004 @01:59PM (#8400081)
    A beowulf cluster of these? There, done... and it felt good!
    • by iminplaya ( 723125 ) on Thursday February 26, 2004 @02:20PM (#8400373) Journal
      Too Many Users

      Evidently they didn't cluster enough...
    • Re:Imagine.. (Score:5, Informative)

      by SEWilco ( 27983 ) on Thursday February 26, 2004 @03:12PM (#8401046) Journal
      I might be the originator of this phrase, so I would be qualified to point out that the proper phrasing requires the informative link:
      Imagine a
      Beowulf cluster [beowulf.org] of these.

      The original links went to NASA/GSFC [nasa.gov], but the Beowulf project central site has moved.

    • by madpierre ( 690297 ) on Thursday February 26, 2004 @04:16PM (#8401762) Homepage Journal
      LO, praise of the prowess of people-kings
      of spear-armed Danes, in days long sped,
      we have heard, and what honor the athelings won!
      Oft Scyld the Scefing from squadroned foes,
      from many a tribe, the mead-bench tore,
      awing the earls. Since erst he lay
      friendless, a foundling, fate repaid him:
      for he waxed under welkin, in wealth he throve,
      till before him the folk, both far and near,
      who house by the whale-path, heard his mandate,
      gave him gifts: a good king he!
      To him an heir was afterward born,
      a son in his halls, whom heaven sent
      to favor the folk, feeling their woe
      that erst they had lacked an earl for leader
      so long a while; the Lord endowed him,
      the Wielder of Wonder, with world's renown.
      Famed was this Beowulf

      Sample from the Project Gutenberg Text of Beowulf.

      Why not do yourself a favour and download it. Classic stuff. :)
    • Re:Imagine.. (Score:3, Insightful)

      by Cypherus ( 675743 )
      Screw Beowulf Clusters...Open Mosix Clusters are where it's at! http://openmosix.sourceforge.net
  • Imagine... (Score:5, Funny)

    by Chmarr ( 18662 ) on Thursday February 26, 2004 @02:00PM (#8400091)
    ... a beowulf cluster of obligatory beowulf cluster comments.
  • by October_30th ( 531777 ) on Thursday February 26, 2004 @02:01PM (#8400101) Homepage Journal
    I thought about this some time ago.

    I decided against a mini-ITX cluster because the floating point performance (why else would you build a cluster?) of VIA CPUs is just abysmal.

    Is there any reason why there are no P4 or AMD mini-ITX mobos around?

    • by wed128 ( 722152 ) on Thursday February 26, 2004 @02:02PM (#8400121)
      I would imagine they run too hot for such a small form factor... this is just a guess, so treat it as such.
    • by J3zmund ( 301962 ) on Thursday February 26, 2004 @02:08PM (#8400196)
      They might be on their way. Here's [commell.com.tw] a 1.7 GHz Pentium M.

    • The reason you don't see any Mini-ITX mobos around the Athlon is power consumption. I recently built a mini-ATX computer around a T-Bird (1 GHz, should have picked something less of an oven), and the mini-ATX power supply crapped out on me, making me buy a REAL ATX power supply. Gah, still can't find a 300 Watt mini-ATX supply.

      Btw, you're wrong - there ARE P4-based mini-ITX mobos.
    • by -tji ( 139690 ) on Thursday February 26, 2004 @02:12PM (#8400264) Journal
      There are P4 Mini-ITX systems available: Pentium 4 [silentpcreview.com]

      But, most mini-itx systems are very small in size, and strive for quiet or silent operation. So, there are obvious problems with the P4's heat/power requirements. Perhaps a better solution is the Pentium-M in a mini-itx form factor. It has pretty good performance, at a low power/heat level: Pentium M [commell.com.tw]. But, most of the Pentium-M boards are intended for industrial or OEM use, so they are hard to find in retail, and are pretty expensive.

    • by niko9 ( 315647 ) * on Thursday February 26, 2004 @02:17PM (#8400338)
      How about Fujitsu's mini-ITX form factor for the Pentium M proc? Runs passive (huge heatsink, but passive nonetheless) and uses fewer electrons.

      Couldn't find a link though, sorry.
    • There are supposedly some Pentium M boards around, as well as P4s... in fact, if you look at Mini-ITX.com's store, they're selling a P4 mini-ITX board. If only its one slot were AGP and not PCI, that would make a hell of a small little gaming box...
    • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Thursday February 26, 2004 @02:25PM (#8400442) Homepage Journal
      the floating point performance (why else would you build a cluster?)
      • To crack encryption?
      • To compile big projects?
      • To compress huge files?

      The floating point is just a convenience. Almost any algorithm can be modified to work with fixed point precision -- and without loss of performance.

      Of course, many people will insist they need FP to be able to count dollars and cents -- they don't even think of counting cents (or any other fraction of the dollar) with integers, for example.

      These are usually the same people who have trouble defining a bit...
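
      A minimal C sketch of the integer-cents idea (the numbers are made up, not from the parent post), showing that money math needs no FP at all:

        /* Sketch: keep currency as integer cents; only format dollars for display. */
        #include <stdio.h>
        #include <inttypes.h>

        int main(void) {
            int64_t price_cents = 1999;               /* $19.99                 */
            int64_t qty         = 3;
            int64_t subtotal    = price_cents * qty;  /* exact: 5997 cents      */

            /* 8.25% tax, rounded to the nearest cent with integer math only */
            int64_t tax   = (subtotal * 825 + 5000) / 10000;
            int64_t total = subtotal + tax;

            printf("total: $%" PRId64 ".%02" PRId64 "\n", total / 100, total % 100);
            return 0;
        }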

      • by bluGill ( 862 )

        Perhaps many people would insist on using FP dollars and cents, but those people are fools, and it is very easy to part them with their money. Just make sure all the rounding errors work out in your favor, which isn't hard if you have access to their accounts.

        Yeah I know that for small numbers FP has no rounding errors, but that doesn't last long.

        • Think "Office Space" people...

          Peter Gibbons: Um, the 7-Eleven, right? You take a penny from the tray.
          Joanna: From the crippled children?
          Peter Gibbons: No, that's the jar. I'm talking about the tray, the pennies for everybody.
      • the floating point performance (why else would you build a cluster?)

        * To crack encryption?
        * To compile big projects?
        * To compress huge files?


        How about scientific computing? That's really the big thing that keeps cluster computing alive. Cracking encryption is the only thing on that list that makes sense. The other stuff shows your lack of knowledge of other disciplines by the fact that you think these are computationally expensive tasks.
        • Mars is not made any closer to Earth by the revelation that Alpha Centauri is really far away...

          How about scientific computing?

          This is why you might need the FP performance. I was answering a totally different question -- what would you do without the good floating point performance.

          The other stuff shows your lack of knowledge of other disciplines by the fact that you think these are computationally expensive tasks.

          Thank you, thank you.

          Would you please demonstrate how I can rebuild a project of 300

      • by mangu ( 126918 ) on Thursday February 26, 2004 @03:04PM (#8400961)
        The floating point is just a convenience. Almost any algorithm can be modified to work with fixed point precision -- and without loss of performance.


        But at a significantly higher development and debugging cost. Why go for an integer adaptation if a P4 can do four FP operations in one clock using SSE2? I have tested my 2.4 GHz P4 at 6 gigaflops, in a practical application doing matrix inversion. The theoretical maximum for my machine would be 9.6 Gflops. If you RTFA, you'll see they mention 3.6 Gflops performance for their cluster, about 60% of my single-processor system. I see no point at all in building that cluster.
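
        (Checking the arithmetic: 2.4 GHz x 4 FP ops per cycle via SSE2 = 9.6 Gflops theoretical peak, so the measured 6 Gflops is roughly 62% of peak, and the cluster's 3.6 Gflops is the "about 60%" of the single P4 figure quoted above.)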

        • by addaon ( 41825 ) <(addaon+slashdot) (at) (gmail.com)> on Thursday February 26, 2004 @04:06PM (#8401658)
          See, that's how I used to think. G4 at 800MHz... 4 fp operations in parallel with altivec... 3.2GFlop goodness. But of course, why stop there? With various tuning, you can get up to 32-way parallel integer math (although going beyond 16, admittedly, sucks). 3.2 GFlop is nice, but 25.6 G-ops ain't too shabby.
        • Power. (Score:3, Interesting)

          Your P4 uses what, >300W? This cluster has a peak load of 200W. Plus you can do more varieties of hardware interfacing at once. That's a reason to build this cluster, if you don't find that clustering things because you can to be a good enough reason.
          • Re:Power. (Score:3, Informative)

            by mangu ( 126918 )
            Your P4 uses what, >300W? This cluster has a peak load of 200W.

            Well, I just applied my, admittedly imprecise, clamp ammeter to the power cable, and got ~2 amps @ 120 V = 240 W. Which means, 240W/6Gflops = 40W/Gflop. That cluster has 200W/3.6 Gflops = 55.555... W/Gflop. Slightly worse...

            I admit that hardware interfacing is getting to be a problem for us hobbyists, since the demise of the ISA bus, but I have been able to get along with the parallel interface. I just hope the USB interface doesn't get to

      • by QuantumFTL ( 197300 ) * on Thursday February 26, 2004 @05:51PM (#8402564)
        The floating point is just a convenience. Almost any algorithm can be modified to work with fixed point precision -- and without loss of performance.

        Apparently you've never done any numerical computing, especially of the scientific variety. In an astrophysics simulation, for instance, the density of a field may span over 20 orders of magnitude, hardly reasonable to do with fixed point arithmetic.

        Not to mention that many iterative algorithms can oscillate wildly in the presence of numerical error.

        It is true that there are many other uses for a cluster besides numerical computing; however, the idea that any floating point algorithm can be converted to fixed point could not be more wrong.

        Disclaimer: My research at Cornell University is high performance clustered numerical computing.

        Cheers,
        Justin
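
        A small illustrative C sketch of that dynamic-range point (the numbers are made up): a 64-bit integer covers fewer than 19 decimal orders of magnitude in total, while an IEEE double spans roughly 1e-308 to 1e308 with about 15 significant digits.

          /* Why 20 orders of magnitude of density is awkward in fixed point. */
          #include <stdio.h>
          #include <stdint.h>
          #include <float.h>

          int main(void) {
              /* Fixed point: pick a scale fine enough for the smallest density,
                 say 1 unit = 1e-10 in physical units.                          */
              const double scale = 1e-10;
              double max_fixed = (double)INT64_MAX * scale;   /* ~9.2e8, about an
                                                                 order of magnitude
                                                                 short of 1e10    */

              printf("int64 fixed point, scale 1e-10: max ~ %.3e\n", max_fixed);
              printf("double: DBL_MIN = %.3e, DBL_MAX = %.3e, %d significant digits\n",
                     DBL_MIN, DBL_MAX, DBL_DIG);
              return 0;
          }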
    • by steveha ( 103154 ) on Thursday February 26, 2004 @02:41PM (#8400677) Homepage
      the floating point performance [...] of VIA CPUs is just abysmal.

      Older C3 cores run the FPU at half the clock rate. If you get the fanless 600 MHz EPIA motherboard, the FPU will be running at 300 MHz.

      The newer, Nehemiah core C3 chips run the FPU at full clock speed. Any C3 newer than Nehemiah should run the FPU at full speed.

      He used the VIA EPIA V8000A motherboard with an Eden core CPU. From what I found on google (here [hardwareirc.com]), the Eden core does run the FPU at full clock speed.

      In any event, he said the cluster has more processing power than a four-P4 SMP system, while taking less electricity to run. And it will be quieter and more reliable. I'd like to see actual benchmarks, but it seems like it makes enough sense.

      I read about a cluster of PocketPCs, and that didn't make practical sense. It was just a fun project.

      steveha
      • he said the cluster has more processing power than a four-P4 SMP system

        Whoops, I made a mistake. He actually said his 12-node VIA cluster has more power than "four 2.4 GHz Pentium 4 machines used in parallel". Not SMP!

        Sorry about the mistake.

        steveha
      • by merlin_jim ( 302773 ) <James.McCracken@stratapul t . com> on Thursday February 26, 2004 @03:54PM (#8401536)
        He used the VIA EPIA V8000A motherboard with an Eden core CPU. From what I found on google (here), the Eden core does run the FPU at full clock speed.

        I have the VIA EPIA 8000 (not sure what the V and A modifiers mean), with an Ezra core. FYI, Eden isn't a core, it's an initiative. The VIA Eden is aka VIA EPIA 5000, and was the first fanless Mini-ITX. Eden was the development product moniker, and came to refer to the motherboard that was first produced from that initiative. It can also refer to any C3 CPU made to run fanless.

        Back onto the original topic; my EPIA 8000 with an Ezra core runs the FPU at half clock. This document [via.com.tw] on the differences between the Ezra/Ezra-T and Nehemiah cores indicates that one of the fundamental differences between the two is the full speed FPU. So I doubt that the article you quoted is accurate...

        Just some more info... Nehemiah was manufactured at 933 MHz, 1 GHz, and speeds up to 2 GHz are planned. The Ezra was manufactured at 533 MHz and 800 MHz in its first run; the 533 is also known as the Eden. The Ezra-T (the second run of the Ezra) was made at 600 MHz (aka Eden), 800 MHz, 933 MHz, and 1 GHz.
    • by dabadab ( 126782 ) on Thursday February 26, 2004 @03:02PM (#8400930)
      There's one thing that makes VIA CPUs very interesting performance-wise: the xcrypt instruction. Using it, the VIA CPUs just beat - and beat badly - anything else at certain tasks.

      Check out Theo de Raadt's little benchmark:
      http://marc.theaimsgroup.com/?l=openbsd-misc&m=107577297024182&w=2 [theaimsgroup.com]
    • Mod me....

      Informative:

      If you're looking for a small form factor for high-end processors, you will likely find future products using the picoBTX form factor. The motherboard layout provides better cooling for hot processors, something mini-ITX can't address. Here's a summary of the BTX form factors from Anandtech [anandtech.com].

      Interesting:

      Has anyone figured out how to use the floating point power in their graphics cards for non-video applications? Those things are becoming so powerful that they need their own heat sinks. Ju
  • by Vexler ( 127353 ) on Thursday February 26, 2004 @02:01PM (#8400107) Journal
    Just imagine Dilbert's boss asking him for a Beowulf cluster.

    Kind of like that strip where he (the boss) wanted to have a SQL database in lime.
  • by Space cowboy ( 13680 ) on Thursday February 26, 2004 @02:02PM (#8400116) Journal
    ... but that's about all it'll be useful for. A Nehemiah CPU is really weedy by today's standards; even the 1 GHz one is about the same as a 600 MHz P3. So, he's got 12 of them, which is probably less CPU power than an average dual P4 motherboard...

    Still, you can get some stats on how the clustering works, what's the best algorithm for dispersing problems, and these boards are cheap, but that's about the only advantage I can see...

    Simon
    • by addaon ( 41825 ) <(addaon+slashdot) (at) (gmail.com)> on Thursday February 26, 2004 @02:24PM (#8400428)
      I agree, but that's actually a very interesting use. It also lets you play around with network topologies, and interconnects, and such. And of course, these boards do have one PCI slot, as well as the standard assortment of serial and parallel, so the hardware people can have fun too. For real number crunching? Not a chance. For doing a $2000 prototype, in 15 nodes, of a $50000 50-node cluster? I can't really think of a more flexible, more convenient, or more affordable option. For doing a $1000, 6-node flexible network simulator, purely for education? Also more than worth it, with few other options around.
    • There are no dual boards for normal P4s since they can't run in SMP mode. You have to buy Xeons, and they aren't exactly cheap. Dual AMD Athlons (the MP model or a modded XP) are your only option for a cheap dual desktop.
    • by jepaton ( 662235 ) on Thursday February 26, 2004 @02:33PM (#8400540)

      A Beowulf cluster of mini-ITX boards is probably the cheapest way to get bragging rights. As a practical way to get fast and cheap parallel computation, it is not.

      However, I have purchased three (V10000 boards) thus far and intend to add more to my network as low power (as in Watts) servers.

      I worked out that given the power of 10.78W (source: mini-itx.com's power comparison tool) for the V series (probably the one with the slowest CPU in the series, board only), I could save a fortune on electricity compared to a more regular computer.

      The electricity company sells electricity at the rate of £0.63 ($1.18) per watt per year. Compared with a standard PC of 100W, I can regain the purchase costs (in savings) of the board and memory within two to three years.

      Also, I found rack mount chassis [icp-epia.co.uk] available cheaper than one for a regular sized case. This influenced my decision a little - who doesn't want a network of rack mounted computers?

      Overall, because of the low price and low power the mini-itx boards are a no brainer if and only if the CPU power of each computer isn't important.

      Jonathan
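
      (Rough arithmetic behind that payback estimate, with the board price being my own guess rather than Jonathan's figure: saving roughly 100 W - 10.78 W ~ 89 W, at £0.63 per watt per year that's about £56 per year; if board plus memory run somewhere around £110-£170, that does indeed pay back in two to three years.)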

    • I actually have a small cluster of similar mini-itx boards (though in 1U chassis) for testing changes on our 160 node FreeBSD cluster. It's especially helpful as our main cluster is 1000 miles away, so having a local cluster to use for crash tests is very helpful. I chose these systems because I've got enough power-sucking servers on 24/7 at home. The ones I've got consume around 1/8th the power of a standard dual Xeon node at 1/5th the cost. Sure performance sucks, but who cares. It's there to do infra
    • It would be quite useful for a university with an undergraduate course in high performance computing to have their own little NoRMA cluster to play with without the space, heat, and power consumption of a supercomputer.

      Let the researchers use the real supercomputer, but the undergraduates can still play with message passing parallel algorithms to their hearts' content.
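
      A minimal sketch of the sort of message-passing toy program meant here (my own example using standard MPI calls, not anything from the article): estimate pi by striding the intervals across ranks and combining the partial sums with MPI_Reduce.

        /* Toy MPI program: midpoint-rule estimate of pi, split across ranks.
           Build with mpicc, run with e.g. "mpirun -np 12 ./cpi".             */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv) {
            int rank, size;
            const long n = 10000000;              /* number of intervals */
            double h, local = 0.0, pi = 0.0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            h = 1.0 / (double)n;
            for (long i = rank; i < n; i += size) {   /* each rank takes a stride */
                double x = h * ((double)i + 0.5);
                local += 4.0 / (1.0 + x * x);
            }
            local *= h;

            /* Sum the partial results onto rank 0 */
            MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("pi is approximately %.15f\n", pi);

            MPI_Finalize();
            return 0;
        }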

    • So, he's got 12 of them, which is probably less CPU power than an average dual P4 motherboard...

      RTFA... he compares performance to 4-6 P4s. He does clustering for a living so I'm assuming he knows how to measure and compare performance at this scale...
    • Samba file server.

      Samba throws open a hell of a lot of threads. (At least on my network of 200 people.) A cluster with each node possessing an external network port would be able to split the threads across dedicated processors. Not too useful for me, but if someone was trying to serve a few thousand clients at a time, that would be useful.

      TMYK

  • Seriously, though... (Score:5, Interesting)

    by Short Circuit ( 52384 ) <mikemol@gmail.com> on Thursday February 26, 2004 @02:02PM (#8400122) Homepage Journal
    All things considered, what's the cost-per-tflop of that sort of system? These guys don't require as much cooling, space, or whatever else you care to think about.

    Has anyone tried stuffing several into a single 1U chassis? For a sort of cluster of clusters?
  • shuttle (Score:3, Interesting)

    by trmj ( 579410 ) on Thursday February 26, 2004 @02:02PM (#8400126) Journal
    My favorite use for those mini-itx boards is making a nice shuttle [shuttle.com] xpc. Cheap, fast gaming computers that are quite portable as well.

    The only problem I've found so far is they only come with nvidia onboard graphics, but that's what the agp slot is for.
  • Imagine... (Score:5, Funny)

    by Anixamander ( 448308 ) on Thursday February 26, 2004 @02:04PM (#8400141) Journal
    ...a new, original joke. Now imagine another one, because that last one wasn't that funny.

    In fact, maybe you just aren't that funny. Except in Soviet Russia.

    Shit, now I'm doing it.
  • This with Chess (Score:3, Interesting)

    by SamiousHaze ( 212418 ) on Thursday February 26, 2004 @02:05PM (#8400150)
    You know, I seriously wonder if this would be a viable option for computer chess programs (http://www.chessbase.com/newsdetail.asp?newsid=25). It certainly is getting cheap to get massive hardware processing power.
  • by JimmyQS ( 690012 ) on Thursday February 26, 2004 @02:06PM (#8400170) Homepage
    We studied 3 mini beowulf systems a while back, here at University of Central Florida, one of which was a mini-ITX beowulf. Here's some info and preliminary results: http://helios.engr.ucf.edu/beowulf/miniature.phtml
  • Why not 16 nodes, or some other power of 2?
  • Cool stuff ... (Score:5, Interesting)

    by Lazy Jones ( 8403 ) * on Thursday February 26, 2004 @02:07PM (#8400191) Homepage Journal
    This rocks - we were considering something similar for our clustering-R&D needs (for trying out new network file systems, failover solutions etc.), but we decided to go with plain P4 barebones instead. They can be stacked nicely, are relatively quiet and the fast CPUs with HT come in handy when you want good latencies at CPU-intensive tasks (dynamic websites etc.).

    Here's a picture [amd.co.at] of our first 4 boxes. The USB stick seen sticking out from one of the boxes is bootable and an excellent replacement for floppy disks...

  • Hmmm (Score:5, Funny)

    by captain_craptacular ( 580116 ) on Thursday February 26, 2004 @02:08PM (#8400200)
    There was no cutting or bending involved. All metal bits were simply cut, drilled, and bolted together using 4-40 hardware.

    So what was it? No cutting, or cutting?
  • FLASH... (Score:2, Interesting)

    Ouch...He's using flash as the HD for the computing nodes. Hope they're set to be mounted read-only.

    Maybe he should consider PXE instead.
    • Re:FLASH... (Score:5, Interesting)

      by technomancerX ( 86975 ) on Thursday February 26, 2004 @02:16PM (#8400324) Homepage
      "He's using flash as the HD for the computing nodes"

      Actually, he's not. IBM Microdrives are not CF; they just have a CF form factor/interface to be compatible with handheld devices. They are hard drives.

      • Re:FLASH... (Score:3, Informative)

        by dabadab ( 126782 )
        Despite the name, CF is NOT flash memory. The CompactFlash Association's definition is this:
        "CompactFlash(R) is a small, removable mass storage device."

        So you are correct in noting that he is actually using HDDs, not flash, but at the same time, he is using CompactFlash (BTW the CF pinout is IDE compatible, so to hook up your CF to your IDE bus all you have to do is connect the wires of the IDE cable and the power cable to the card)
  • by Alioth ( 221270 ) <no@spam> on Thursday February 26, 2004 @02:09PM (#8400206) Journal
    Whilst not clustering, a good use for these low power systems would be for web hosts or budget dedicated servers. I'm sure a server room full of these would require much less airconditioning (and power) than the typical servers. Many people require dedicated servers for security (they are the only one on the box) and don't require fast FPU performance.
  • Just hit reload! It seems to be holding up just fine, with the occasional bad hit. Gotta give 'em a break, this is /. after all.
  • by pegr ( 46683 ) on Thursday February 26, 2004 @02:10PM (#8400231) Homepage Journal
    Just what do you do with such a thing? I don't mean obvious commercial uses, but as a home-bound geek, what reason can I use to justify this to my wife?
    • I wonder too (Score:3, Insightful)

      by Atario ( 673917 )
      Well...I'd be able to get major numbers in SETI@Home...um...

      Video encoding? (Now, where'd I put that parallel-processing version of AVISynth?)

      Rent it out to a university?

      Program it to solve chess and leave it going till it does?

      Get a decent frame rate in any FPS, once and for all? (Note to self: develop parallel-processing graphics card.)
  • Test Text. (Score:4, Informative)

    by F34nor ( 321515 ) * on Thursday February 26, 2004 @02:11PM (#8400242)
    I built a Mini-ITX based massively parallel cluster named PROTEUS. I have 12 nodes using VIA EPIA V8000, 800 MHz motherboards. The little machine is running FreeBSD 4.8 and MPICH 1.2.5.2. Troubles installing and configuring FreeBSD and MPICH were few. In fact, there were no major issues with either FreeBSD or MPICH.

    The construction is simple and inexpensive. The motherboards were stacked using threaded aluminum standoffs and then mounted on aluminum plates. Two stacks of three motherboards were assembled into each rack. Diagonal stiffeners were fabricated from aluminum angle stock to reduce flexing of the rack assembly.

    The controlling node has a 160 GB ATA-133 HDD, and the computational nodes use 340 MB IBM microdrives in compact flash to IDE adapters. For file I/O, the computational nodes mount a partition on the controlling node's hard drive by means of a network file system mount point.

    Each motherboard is powered by a Morex DC-DC converter, and the entire cluster is powered by a rather large 12V DC switching power supply.

    With the exception of the metalwork, power wiring, and power/reset switching, everything is off the shelf.

    At present, the idle power consumption is about 140 Watts (for 12 nodes) with peaks estimated at around 200 Watts. The machine runs cool and quiet. The controlling node has 256 MB of RAM and a 160 GB ATA-133 IDE hard disk drive. The computational nodes have 256 MB of RAM each and boot from 340 MB IBM microdrives by means of compact flash to IDE adapters. The computational nodes mount /usr on the controlling node via NFS, for storage and to allow for a very simple configuration. No official benchmarks have been run, but for simple computational tasks the mini cluster appears to be faster than four 2.4 GHz Pentium 4 machines used in parallel, at a fraction of the cost and power use.

    Power and Cooling

    Mini-ITX boards have very low power dissipation compared to most motherboard/CPU combinations in popular use today. This means that a Mini-ITX cluster with as many as 16 nodes won't need special air conditioning. Low power dissipation also means low power use, so you can use a single inexpensive UPS to provide clean AC power for the nodes.

    In contrast, a 12-16 node cluster built with Intel or AMD processors will generate enough heat that you will likely need heavy duty air conditioning. Additionally, you will need adequate electrical power to deliver the 2-3 kilowatt peak load that your 12 node PC cluster will require. Plan on having higher than average utility bills if you use PCs...

    Hardware Construction

    The cluster is built in two nearly identical racks. Each rack has two stacks of three motherboards and dc-dc converters mounted on aluminum standoffs.

    The compact flash adapters used to mount the microdrives are also in stacks of three. Each stack of boards is mounted on a 7 inch by 10 inch, 0.0625 inch thick 6061-T6 aluminum plate, as are the microdrive stacks. There are seven metal plates in all in each rack.

    The top cover plate has the mounting bracket for the 6 on/off/reset switches.

    The plate below it is home to the power distribution terminal block. The power delivery cable for each rack is heavy duty 14 gauge stranded wire with pvc insulation. The power cabling from the terminal strip to each of the dc-dc converters is 18 gauge stranded pvc insulated hookup wire. The wiring for the power/reset switches is 24 gauge stranded, pvc insulated wire.

    The top rack houses nodes one through six (node one is the controlling node). The bottom plate of the top rack also houses the 160 GB ATA-133 hard disk drive used by the controlling node. All other nodes make use of the IBM microdrives. Node number three has a spare compact flash adapter which can be used to duplicate microdrives for easy node maintenance.

    The disk drive and power cabling to the motherboards was dressed as sanely as possible on the back panel. The liberal use of nylon cable ties helps reduce the ten
  • by caffeinefiend ( 681092 ) on Thursday February 26, 2004 @02:13PM (#8400270)
    Yet another example of why you shouldn't do everything that you can do! These puppies aren't exactly famous for their flops-per-dollar ratio. In truth, it would be more efficient (and cost effective) to make the cluster out of PIIIs. Anyhow, I'm off to go cluster a few toaster ovens; I hear that they offer a great delicious-to-efficiency ratio. Chris
    • Yes, but see my comment here [slashdot.org], and its parent comment, for why this is an interesting option, even if not the best performance.
    • by enkidu ( 13673 ) on Thursday February 26, 2004 @02:38PM (#8400628) Homepage Journal
      Efficiency can take on many meanings depending on what your objective function looks like. Undoubtedly you can get more FLOP for the $. But that isn't why you'd use a setup like this. I could also see a use for this if you were trying to optimize for FLOPs / Watt. Or FLOPs / dB. Or FLOPs / ft^3. This kind of a computing setup seems to be optimized for low-power, low noise, low-maintenance and small space uses. I can definitely envision scenarios where you could optimally arrive at such a setup.
      • Flops/$$$ = free (Score:4, Insightful)

        by poptones ( 653660 ) on Thursday February 26, 2004 @06:58PM (#8403180) Journal
        As a green geek I can't resist pointing out this merit: with only a 200W power dissipation this would be "home friendly" even in a non air conditioned house during the hot Mississippi summers. And with only a 200W PEAK draw, the entire system could be powered by a single PV panel and one or two storage batteries. Trade the "high quality UPS" for a couple of batteries and a PV panel (or cheaper still if you're in the midwest or near a coastline, a windmill) and you have a cluster that could run without any "store bought" AC at all.
  • slashdotted already? (Score:5, Informative)

    by cetan ( 61150 ) on Thursday February 26, 2004 @02:14PM (#8400292) Journal
    sheesh that didn't take long.

    I managed to get it mirrored here:
    page 1:
    http://www.phule.net/mirrors/mini-itx-cluster.html [phule.net]
    page 2:
    http://www.phule.net/mirrors/mini-itx-cluster2.html [phule.net]
    page 3:
    http://www.phule.net/mirrors/mini-itx-cluster3.html [phule.net]
  • by aztektum ( 170569 ) on Thursday February 26, 2004 @02:24PM (#8400422)
    They musta been runnin' their webserver on one!

    *ba dum ch*
  • by Cpt_Kirks ( 37296 ) on Thursday February 26, 2004 @02:26PM (#8400455)
    I can't wait for the new, smaller nano-ITX boards to come out. 4.5" on a side, 1 GHz CPU, and draws 7 watts. I got an email from VIA claiming they will be released in April.

    MB, slim DVD and laptop HD in a case the size of a large paperback book!

    It will make my "K-Mart Toolbox Mini-ITX PVR" look like a full tower in comparison!

  • Sounds Fun (Score:5, Interesting)

    by RAMMS+EIN ( 578166 ) on Thursday February 26, 2004 @02:28PM (#8400476) Homepage Journal
    I have been thinking about this lately. I get disgusted by the fans everywhere (especially since the one in my laptop makes an awful amount of noise sometimes and still doesn't prevent the beast from overheating and shutting down). Aside from being noisy, computers have way more CPU power than I need, and cost more than I am willing to spend. And they suck up a lot of power. (Some might add that they take a lot of space.)

    I think all of these could be solved at once. What if someone built a low-power, low-noise, low-cost computer, good enough for running light office applications? I don't mean OpenOffice, but rather lightweight programs that implement the functionality people use _without_ the bloat. My 486 handles email just fine, and WYSIWYG word processors were once satisfied with a first-generation Pentium (and even those were already bloated).

    Current PDAs have more than enough processing power to handle those tasks, and I've noticed that companies like gumstix [gumstix.org] build and sell devices almost like what I have in mind (the gumstix don't seem to have display connectors, though). Hey, these machines could actually be portable and have a really decent battery life (more than a full working day); that would be a killer!

    Am I just daydreaming here or are others with me? Maybe you know of devices that do this job? Someone recommended Sharp's Zaurus, which is excellent, but still rather more expensive than what I have in mind.
  • Massively Parallel (Score:5, Insightful)

    by Seanasy ( 21730 ) on Thursday February 26, 2004 @02:30PM (#8400501)
    I built a Mini-ITX based massively parallel cluster named PROTEUS. I have 12 nodes using VIA EPIA V8000, 800 MHz motherboards.

    I'd just like to point out that 12 nodes is not "massively parallel."

  • Hey gang,
    I'd really like to build and use my own cluster, as I do have some MPI experience from college. The only question is: What are they good for at home? I just can't justify the expense to myself without figuring out what I could really do with a cluster if I built one.

    Ideas sought!

    ~D
  • Why would he use microdrives with a CF to IDE converter? Why not 2.5" drives? You could probably get larger, faster disks for the same price.
  • by merlin_jim ( 302773 ) <James.McCracken@stratapul t . com> on Thursday February 26, 2004 @03:39PM (#8401366)
    I mean, those IBM 340 MB microdrives aren't really that cheap... you can get full size hard drives for the same price...

    I've always wondered; why not PXE boot something like this? Set your node controller to also do DHCP and you're set.

    While you're at it, use the CL version for the controller, which has two network cards, and build a NATting firewall into the node controller too. Then you have a plug-in appliance that doesn't interfere with your network topology at all. PXE boot it and the motherboards will only need RAM.
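
    A rough sketch of what the DHCP side of that could look like, assuming ISC dhcpd on the node controller and a PXELINUX image served over TFTP (the addresses and filename here are made up):

      subnet 192.168.0.0 netmask 255.255.255.0 {
          range 192.168.0.100 192.168.0.150;    # the diskless compute nodes
          next-server 192.168.0.1;              # TFTP server = node controller
          filename "pxelinux.0";                # PXELINUX boot loader image
      }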

    The board he used is available for $99 with proc. A stick of 256 is probably around $20.

    The best price froogle would give me on the drives he's using is $60, and they're prone to wear and tear.

    Add in the $10 CF-IDE adapter and the drive is 60% of the cost of the motherboard itself...

    Hell if you don't want the network bogged down with a bunch of PXE booting nodes all the time, just get cheap CD drives and put dyne:bolic [dynebolic.org] on it, which does automagic clustering...

    Personally, if I were to do it, I'd set dynebolic to PXE boot, get a huge stack of motherboards and RAM, and do it that way. Then adding/changing nodes is relatively simple... IIRC, they're even factory set to try PXE booting if no IDE devices are found...

    The only other change I would make would be to ditch the 16-port switch... move to 4-ports, connect those to a 4-port with gigabit uplink, and connect that to a gigabit switch. Of course at this point I'm talking about really scaling the cluster up, to a few hundred nodes or so. At that point I'd stop using a mini-ITX board for my node controller and go with a motherboard with a bit more juice behind it, dual procs, RAID 0/1, the whole shebang...

    Now if only I had a couple grand burning a hole in my pocket... speaking of which:

    motherboard: $100
    RAM: $20
    DC-DC converter: $30
    CF adapter: $10
    Microdrive: $60

    Total: $220
    Total PXE booter: $150
    Savings: 30%

    So, not counting the costs of cabinets, power rectifier/UPS, wiring, network gear, and labor, you can increase the size of your cluster by 30% for the same cost, just for setting up PXE boot...
