Hardware

Linux Desktop Clustering - Pick Your Pricerange

crashlight writes: "A Linux cluster on the desktop--Rocket Calc just announced their 8-processor "personal" cluster in a mid-tower-sized box. Starting at $4500, you get 8 Celeron 800MHz processors, each with 256MB RAM and a 100Mbps Ethernet connection. The box also has an integrated 100Mbps switch. Plus it's sexy." Perhaps less sexy, but for a lot less money, you can also run a cluster of Linux (virtual) machines on your desktop on middle-of-the-road hardware. See this followup on Grant Gross's recent piece on Virtual Machines over at NewsForge.
This discussion has been archived. No new comments can be posted.

  • Rack Density (Score:2, Interesting)

    by Genady ( 27988 ) <gary.rogers@NOSPaM.mac.com> on Tuesday January 22, 2002 @03:43PM (#2883554)
So... how many processors can you fit into a standard 44U enclosure now? If they've got an integral Ethernet switch, do you get a gigabit uplink out? This would actually be really cool for universities and government agencies building insanely great clusters in a small floor space. Still, if you want insanely great, maybe you should cluster a few briQ's [terrasoftsolutions.com] together.
  • by rhdwdg ( 29954 ) on Tuesday January 22, 2002 @03:46PM (#2883566) Homepage
    I could. The form factor is the thing. I could use a few extra CPUs in a MOSIX cluster for my desktop, but I have no room for a small rack and associated power. This fits. I could make them into little application clusters -- 256 MB of flash is plenty per device. I could wish they had GigE, of course (since they obviously need to connect to NAS for data) or multiple NICs per system but even 100 Mb is sufficient for the intended markets.
  • only 100mbps? (Score:4, Interesting)

    by Restil ( 31903 ) on Tuesday January 22, 2002 @03:47PM (#2883575) Homepage
The primary disadvantage of clustering is the network bottleneck. You lose out because even 100Mbps is only a small fraction of what the PCI bus of even a low-end Pentium system can handle. At LEAST go with gigabit Ethernet so you can push over 100 megabytes per second between processors. This would greatly increase the usefulness of an integrated cluster by reducing its one primary disadvantage.

    Also a bit pricey, but there would be some cost advantage in reduced footprint for some environments.

    -Restil
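    To put rough numbers on Restil's bottleneck point (a back-of-the-envelope sketch; the PCI figure is the theoretical ceiling of a plain 32-bit/33MHz bus, and real throughput is lower on all three):

```shell
# Theoretical peak throughput: 100Mbps Ethernet vs. gigabit vs. 32-bit/33MHz PCI
awk 'BEGIN {
  printf "100Mbps Ethernet: %.1f MB/s\n", 100 / 8;    # bits -> bytes
  printf "Gigabit Ethernet: %.1f MB/s\n", 1000 / 8;
  printf "32-bit/33MHz PCI: %.1f MB/s\n", 33.33 * 4;  # 4 bytes per 33MHz cycle
}'
```

    So fast Ethernet saturates at roughly a tenth of what even a cheap node's PCI bus could move, which is exactly the gap gigabit would close.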
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Tuesday January 22, 2002 @03:48PM (#2883582) Homepage Journal
I just thought of something else. I have never used a Beowulf cluster, so maybe I'm completely wrong, but virtual machines could make a Beowulf more easily upgradeable. The idea is that you'd make a cluster with a whole bunch of virtual machines, say 1024. The cluster is fixed at that size for all the software that runs. But in reality, you've got 32 processors actually running. When you upgrade the cluster to 64, you don't need to reconfigure any of the software that runs on the cluster, because it all assumes that you've got 1024 processors. But you get a performance increase because there are now more physical processors. As I said before, I don't know much about clusters. I imagine that somebody who really does know will quickly either confirm what I said or reduce my idea to a pile of stinking rubble.
  • by cweber ( 34166 ) <<moc.liamg> <ta> <dsrebewc>> on Tuesday January 22, 2002 @04:09PM (#2883708)
    You're mostly off the mark, I'm afraid. Most software that uses a cluster runs through MPI or simply through scripts. Both mechanisms allow for easy adjustment in the number of nodes/CPUs you use.

Many large compute problems are embarrassingly parallel, i.e. the same calculation needs to be repeated with slightly different input parameters. There's basically no interprocess communication, just a little forethought about file-naming conventions, total disk and memory usage, etc.
    Execution of such tasks reduces essentially to a simple loop:
    foreach parameter_set (...)
        rsh nodeN myprog < infileM > outfileM
    end
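    A runnable sketch of that dispatch loop in plain sh (the node names, parameter values, and the echoed stand-in for myprog are all hypothetical; on a real cluster you'd background an rsh or ssh per iteration instead):

```shell
#!/bin/sh
# Embarrassingly parallel dispatch: one run per parameter set,
# farmed out round-robin over the cluster nodes.
NODES="node1 node2 node3 node4 node5 node6 node7 node8"
i=0
for p in 0.1 0.2 0.5 1.0 2.0 5.0; do
    set -- $NODES
    shift $(( i % $# ))            # round-robin node selection
    # on a real cluster: rsh "$1" myprog < "in_$p" > "out_$p" &
    echo "param $p -> $1"
    i=$(( i + 1 ))
done
# wait   # then reap the backgrounded rsh jobs
```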

    For those programs that actually run a single instance of the code on several CPUs, you have to be acutely aware of how many nodes you use. Your code has its own limits on how well it scales to multiple CPUs, and your cluster imposes limits on how well (in terms of latency and bandwidth) nodes can communicate. Very few codes in this world scale well beyond 64 CPUs, especially not on run-of-the-mill clusters with plain ethernet interconnects. Fortunately, it is trivial to readjust the number of nodes used for each invocation of the code.
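    The scaling wall cweber describes can be made concrete with Amdahl's law, speedup(N) = 1 / (s + (1-s)/N), where s is the serial fraction of the code (the 5% here is an arbitrary assumption for illustration; interconnect latency only makes the real curve worse):

```shell
# Amdahl's law: even a 5% serial fraction caps cluster speedup hard.
awk 'BEGIN {
  s = 0.05;                        # assumed serial fraction
  for (N = 1; N <= 256; N *= 4)
    printf "%3d CPUs -> %4.1fx speedup\n", N, 1 / (s + (1 - s) / N);
}'
```

    Going from 64 to 256 CPUs buys only about a 20% improvement at that serial fraction, which is why so few codes are worth running past 64 nodes.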

    Lastly, virtual nodes cannot easily simulate the behavior of real nodes. Again, it's the interconnect latency and bandwidth. When it comes to supercomputing, only trust what you have run and measured on a real life setup with your own code and input data.
  • by qurob ( 543434 ) on Tuesday January 22, 2002 @04:31PM (#2883838) Homepage
    Although I'm joking, let's just take a look at some numbers, hypothetically speaking.

    *borrowed from Tom's Hardware*

    Linux Compiling Test

    3.35 minutes for an Athlon XP 2000+
    14.2 minutes for an Intel Celeron 800MHz

    (now, here's where we stretch it)

    Figure 1.7 minutes for a dual Athlon XP 2000+, 50% of the single-CPU time.

    1.7 x 8 = 13.6 minutes -- close to the Celeron's 14.2, so one dual Athlon is worth roughly eight of these Celerons.


    But who really compiles with a cluster, anyway?

    It'd still be faster... at least on a few benchmarks, and at least in theory.
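    For the record, the ideal-case arithmetic behind that comparison (assuming perfect 8-way and 2-way parallel scaling, which compile jobs rarely achieve in practice):

```shell
# Best-case compile times from the single-CPU numbers quoted above
awk 'BEGIN {
  celeron = 14.2; athlon = 3.35;   # minutes, from the Tom'"'"'s Hardware figures
  printf "8x Celeron 800, ideal scaling: %.2f min\n", celeron / 8;
  printf "dual Athlon XP 2000+, ideal:   %.2f min\n", athlon / 2;
}'
```

    Under those (generous) assumptions the $4500 cluster and one dual Athlon land within a few seconds of each other.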
Cost of upgrading what? Did you even read the article? This is a CLUSTER, not your run-of-the-mill desktop or workstation. I could easily get Linux running on the old 486 motherboard that's somewhere in the bottom of my closet.


If any OS is expensive due to upgrades, it is definitely Micro$oft's. Can you see Windows XP running on a 486 at 33MHz? I thought not.


Additionally, Linux costs a LOT less for IT shops to administer than Microsoft operating systems. With Microsoft operating systems, you have to *click* here, *click* in this text field, etc. I could have ipchains up and running faster than you could have NAT running on Windows 2000.
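    The ipchains setup being alluded to really is only a few lines (this is the Linux 2.2-era syntax; the 192.168.1.0/24 internal subnet is an assumption, and it all has to run as root):

```shell
# Masquerade an internal subnet behind one public IP (ipchains, kernel 2.2)
echo 1 > /proc/sys/net/ipv4/ip_forward   # enable IP forwarding
ipchains -P forward DENY                 # default-deny forwarding policy
ipchains -A forward -s 192.168.1.0/24 -j MASQ   # masquerade the LAN
```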


However, this cluster is a great solution to a lot of problems. It would definitely free up colocation rack space, and make it easier to do virtual hosting.


    r00tdenied
  • Claims about VMs (Score:2, Interesting)

    by zaqattack911 ( 532040 ) on Wednesday January 23, 2002 @01:36AM (#2886444) Journal
Ok... so VMs make sense because they allow a separate virtual Linux OS for each major server function (web, database, ftp...). This is good for management and accounting. However, "they" are still making claims that it offers some kind of performance advantage over running them all on one OS. I mean... call me crazy, but I don't see how there is much of a performance advantage -- if anything, just wasted memory. Someone please throw me a clue.
