Hardware

Ask Slashdot: Building a Cheap Computing Cluster? 160

New submitter jackdotwa writes "Machines in our computer lab are periodically retired, and we have decided to recycle them and put them to work on combinatorial problems. I've spent some time trawling the web (this Beowulf cluster link proved very instructive) but have a few reservations regarding the basic design and air-flow. Our goal is to do this cheaply but also to do it in a space-conserving fashion. We have 14 E8000 Core2 Duo machines that we wish to remove from their cases and place side-by-side, along with their power supply units, on rackmount trays within a 42U (19", 1000mm deep) cabinet." Read on for more details on the project, including some helpful pictures and specific questions.
jackdotwa continues: "Removing them means we can fit two machines into 4U (as opposed to 5U). The cabinet has extractor fans at the top, and the PSUs and motherboard fans (which pull air off the CPU and exhaust it laterally; see images) face in the same direction. Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis? Would there be electrical interference with the motherboards and CPUs exposed in this manner? We have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small), judging by the guide in the first link. However, I've been asked to place UPSs in the bottom of the cabinet (they will likely be non-rackmount UPSs as they are considerably cheaper). Would this be, in anyone's experience, a realistic request (I'm concerned about the additional heating in the cabinet itself)? The nodes in the cabinet will be diskless and connected via a rack-mountable gigabit ethernet switch to a master server. We are looking to purchase rack-mountable power distribution units to clean up the wiring a little. If anyone has any experience in this regard, suggestions would be most appreciated."
  • Imagine (Score:5, Funny)

    by BumbaCLot ( 472046 ) on Tuesday March 12, 2013 @12:50PM (#43150925)

    A beowulf cluster of these! FP

    • Awesome... it feels like /. circa 2000 again.
      • Re:Imagine (Score:5, Interesting)

        by Ogi_UnixNut ( 916982 ) on Tuesday March 12, 2013 @01:31PM (#43151407) Homepage

        Yeah, except back in the 2000s people would be thinking it was a cool idea, and there would be at least 4 other people who had recently done it and could give tips.

        Now it is just people saying "Meh, throw it away and buy newer more powerful boxes". True, and the rational choice, but still rather bland...

        I remember when nerds here were willing to do all kinds of crazy things, even if they were not a good long term solution. Maybe we all just grew old and crotchety or something :P

        (Spoken as someone who had a lot of fun building an openmosix cluster from old AMD 1.2GHz machines my uni threw out.)

        • by Anonymous Coward

          The difference is that we take the clustering part for granted now. The question wasn't something interesting like how do I do supercomputer-like parallel activities on regular PCs or solve operational issues. It was just about physically putting a bunch of random parts into a rack on a low budget.

          But we won already... now, the mainstream is commodity rack parts. You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment.

          • Re:Imagine (Score:5, Insightful)

            by CanHasDIY ( 1672858 ) on Tuesday March 12, 2013 @02:48PM (#43152083) Homepage Journal

            You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment.

            Methinks you've missed the key purpose of using old equipment one already owns...

            • by Cramer ( 69040 )

              No, we haven't. What you and many others (including the poster) miss is how much time and effort -- and yes, money -- will go into building this custom, already obsolete, cluster. His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)

              (I'm in the same boat -- as I'm sure everyone else is. I have stacks of o

              • Since this is obviously a 'pet' project, i.e. something he's doing just to see if it can be done, time and effort costs don't really factor in, IMO. Like when I work on my own truck, I don't say, "it cost me $300 in parts and $600 in labor to fix that!"

                His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)

                I find the idea of jury-rigging up a rackmount a bit specious myself... But again, this appears to be a 'can we do it' type project, so I don't feel compelled to criticize like I would if he were trying to do this with some mission-critical system.

              • Hell, this is still done in Unis. I used to run a test cluster for my Uni's chem dept that was basically retired lab machines on home depot wire shelving in our machine room. The only thing that cost money for the dept was the headnode (which was used for staging jobs for the big clusters too, and as a file server for storing job output), the Procurve switches, and my time I suppose too. It was a useful cluster for testing things before they got run on big clusters where time was more metered and for getting
                • by Cramer ( 69040 )

                  Again, you miss the point. You did it "right"... took the old machines from desks and sat them on a shelf. Translation: the absolute minimum amount of time and effort. The poster is taking the Dell Optiplexes apart to make a "google cluster" (i.e. motherboard bolted to a sheet), thus making them take (marginally) less space. He's putting in a whole lot of work for very little gain.

                  (For the record, I've built clusters on the uber-cheap using 1U (quad-core opteron) rack mount servers from ebay sellers. A

        • I remember when nerds here were willing to do all kinds of crazy things, even if they were not a good long term solution. Maybe we all just grew old and crotchety or something :P

          Actually, I suspect it is the young risk-averse special snowflakes that are all saying to throw money at it. D'oh.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Back then, people read slashdot at -1, nested, and laughed at the trolls. Right now, I wouldn't be surprised if I'm modded -1 within about 15 minutes by an editor with infinite mod points. Post something the group-think disagrees with, get downmodded. Post something anonymous, no one will read it. Post something mildly offensive, get downmodded.

        We didn't have fucking flags back then and the editors didn't delete posts. Now they do. Fuck what this site has become.

    • by Anonymous Coward

      It's been a long time since "Imagine a beowulf cluster of those!" made any degree of sense, or even appeared on /.

      Natalie Portman's Hot Grits to you!

  • Don't do it (Score:5, Insightful)

    by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Tuesday March 12, 2013 @12:54PM (#43150975) Homepage Journal
    Seriously, it isn't worth your effort - especially if you want something reliable. People who set out to make homemade clusters find out the hard way about design issues that reduce the life expectancy of their cluster. There are professionals who can build you a proper cluster for not a lot of money if you really want your own, or even better you can rent time on someone else's cluster.
    • Re: (Score:3, Insightful)

      Get an older, CUDA-capable card and have whoever does your coding write for it instead. I doubled my 10 years' worth of SETI work units in just 2 weeks. A CPU is just a farmer throwing food to the racehorse nowadays.

      • by eyegor ( 148503 )

        GPU-based computing's a great idea, but not appropriate for all problems. There's also significantly more work managing memory and all that with a GPU.

        We have about 50 M2070 GPUs in production and virtually no one uses them. They depend instead on our CPU resources since they're easier to program for.
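        For illustration, a rough sketch of the explicit host-to-device copies that CPU-only code never has to think about (assuming numba and a CUDA-capable card; array names and sizes are made up):

            import numpy as np
            from numba import cuda

            @cuda.jit
            def saxpy(a, x, y, out):
                i = cuda.grid(1)                  # global thread index
                if i < x.size:
                    out[i] = a * x[i] + y[i]

            x = np.arange(1_000_000, dtype=np.float32)
            y = np.ones_like(x)

            d_x = cuda.to_device(x)               # explicit copy: host -> device
            d_y = cuda.to_device(y)
            d_out = cuda.device_array_like(d_x)   # allocate the result on the device

            threads = 256
            blocks = (x.size + threads - 1) // threads
            saxpy[blocks, threads](2.0, d_x, d_y, d_out)

            result = d_out.copy_to_host()         # explicit copy: device -> host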

        • by caferace ( 442 )
          I just left a place that had over 100 M2075s, dualed up, used for R&D and test, all on the same network... During the day they were mostly running hot, all day. Some jobs ran overnight or over the weekend. As they say, YMMV.
        • by hazeii ( 5702 )

          If no-one's using the M2070s, a project like Einstein@home [uwm.edu] certainly could.

    • Re:Don't do it (Score:5, Insightful)

      by Anonymous Coward on Tuesday March 12, 2013 @01:22PM (#43151277)

      Seriously, it isn't worth your effort - especially if you want something reliable. People who set out to make homemade clusters find out the hard way about design issues that reduce the life expectancy of their cluster. There are professionals who can build you a proper cluster for not a lot of money if you really want your own, or even better you can rent time on someone else's cluster.

      If the goal of this is reliable performance, you're absolutely right. But if the goal is to teach yourself about distributed computing, networking, diskless booting, all the issues that come up in building a cluster, on the cheap - then this is a great idea. Just don't expect much from the end product - you'll get more performance from a modern box with tens of cores on a single motherboard.

    • by Anonymous Coward

      I agree with this poster. After building a homebrew HPC environment and then working with a vendor-engineered solution, I can tell you that taking old hardware is really not worth it other than as a learning exercise. Nevertheless, building it would be fun, just not practical. So from a learning perspective, knock yourself out.

      From a pragmatic point of view, the hardware is old, and not very efficient in terms of electricity. Also considering that a single TESLA card can deliver anywhere from 2 to 4

    • Nonsense! Home-built cluster can be cheap and very educational. http://helmer.sfe.se/ [helmer.sfe.se]
      • Nonsense!

        Home-built cluster can be cheap and very educational.

        http://helmer.sfe.se/ [helmer.sfe.se]

        Perhaps as a hobby. But generally they aren't cost-effective if you're paying the labor for someone to implement it with 5-year-old hardware in a cluster-fuck (pun intended), jammed into a rack in a haphazard arrangement, when there isn't even a clear need or requirement for it.

    • by stymy ( 1223496 )
      Also, always calculate GHz/watt or whatever, as newer processors are more efficient, and new processors can sometimes pay for themselves pretty quickly through a lower electricity bill.
    • by dbIII ( 701233 )
      It's actually not all that hard anymore. It's not just that it isn't that difficult to do from scratch, there are also distros like ROCKS designed to run with almost no configuration.
      As for the hardware side, so long as you stick to nothing more exotic than gigabit copper it's not hard. Taking things out of their chassis like the poster suggests is asking for a bit of trouble since they are designed to channel the incoming air, so that would need ugly measures like large diameter fans to force air through
    • Seriously, it isn't worth your effort - especially if you want something reliable.

      Huh? Maybe they just want to do it because it is possible? Kind of like climbing a mountain...

      There are professionals who can build you a proper cluster

      And how did those professionals become professionals? How did they learn the pitfalls about building clusters? What happened to learning something yourself because it is possible?

      *sigh* Just like the "scientists" who recently discovered that people with different values have... different values. I despair.

  • don't rule out (Score:5, Insightful)

    by v1 ( 525388 ) on Tuesday March 12, 2013 @12:55PM (#43150981) Homepage Journal

    throwing gear away or giving it away. Just because you have it doesn't mean you have to, or should, use it. If energy and space efficiency are important, you need to carefully consider what you are reusing. Sure, what you have now may have already fallen off the depreciation books, but if it's going to draw twice the power and take double the space that newer used kit would, it may not be the best option, even when the other options involve purchasing new or newer-used gear.

    Not saying you need to do this, just recommending you keep an open mind and don't be afraid to do what needs to be done if you find it necessary.

    • Re:don't rule out (Score:5, Interesting)

      by eyegor ( 148503 ) on Tuesday March 12, 2013 @01:18PM (#43151227)

      Totally agree. We had a bunch of dual dual-core server blades that were freed up, and after looking at the power requirements per core for the old systems we decided it would be cheaper in the long run to retire the old servers and buy a smaller number of higher-density servers.

      The old blades drew 80 watts/core (320 watts) and the new ones which had dual sixteen-core Opterons drew 10 watts/core for the same amount of overall power. That's a no brainer when you consider that these systems run 24/7 with all CPUs pegged. More cores in production means your jobs finish up faster, you'll be able to have more users and more jobs running and use much less power in the long run.

    • Re:don't rule out (Score:5, Insightful)

      by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday March 12, 2013 @01:28PM (#43151361) Homepage

      I agree. I've been doing IT for a while now, and this is the kind of thing that *sounds* good, but generally won't work out very well.

      Tell me if I'm wrong here, but the thought process behind this is something like, "well we have all this hardware, so we may as well make good use out of it!" So you'll save a few hundred (or even a few thousand!) dollars by building a cluster of old machines instead of buying a server appropriate for your needs.

      But let's look at the actual costs. First, let's take the costs of the additional racks, and any additional parts you'll need to buy to put things together. Then there's the work put into implementation. How much time have you spent trying to figure this out already? How many hours will you put into building it? Then troubleshooting the setup, and tweaking the cluster for performance? Now double the amount of time you expect to spend, since nothing ever works as smoothly as you'd like, and it'll take at least twice as long as you expect.

      That's just startup costs. Now factor in the regular costs of additional power and AC. Then there's the additional support costs from running a complex unsupported system, which is constructed out of old unsupported computer parts with an increased chance of failure. This thing is going to break. How much time will you spend fixing it? What additional parts will you buy? Will there be any loss of productivity when you experience down-time that could have been avoided by using a new, simple, supported system? What's the cost of that lost productivity?

      That's just off the top of my head. There are probably more costs than that.

      So honestly, if you're doing this for fun, so that you can learn things and experiment, then by all means have at it. But if you are looking for a cost-effective solution to a real problem, try to take an expansive view of all the costs involved, and compare *all* of the costs of using old hardware vs. new hardware. Often, it's cheaper to use new hardware.

      • Re:don't rule out (Score:5, Insightful)

        by Farmer Pete ( 1350093 ) on Tuesday March 12, 2013 @01:43PM (#43151521)
        But you're missing the biggest reason to do this...The older hardware is already purchased. New hardware would be an additional expense that requires an approval/budgeting process. Electricity costs lots of money, but depending on the company, that probably isn't directly billed to the responsible department. Again, it's hard to go to your management and say that you want them to spend X thousand dollars so that they will save X thousand dollars that they don't think they need to spend in the first place.
        • by Anonymous Coward

          While I don't agree that this project is a GOOD idea: this! A thousand times, THIS!

          I just spent 6 months convincing the management here that we can update our 7-year-old servers with 50% less equipment, save 75% on power and cooling, and pay for the project in about 18 months, without even mentioning that our userbase/codebase has grown to the point that we are paying people to stare at screens.

          Read this article about the Titan upgrade to Oak Ridge Supercomputer http://www.anandtech.com/show/6421/inside-the-tita

      • Re:don't rule out (Score:5, Insightful)

        by i.r.id10t ( 595143 ) on Tuesday March 12, 2013 @01:47PM (#43151553)

        On the other hand, depending on what kind of courses you teach (tech school, master's degree comp sci, etc.), keeping them around for *students* to have experience building a working cluster and then programming stuff to run parallel on them may be a good idea. Of course, this means the boxes wouldn't be running 24/7/365 (more likely 24/7 for a few weeks per term) so the power bill won't kill you, and it could provide valuable learning experience for students... especially if you have them consider the power consumption and ask them to write a recommendation for a cluster system.

        • That's the best idea so far. A few machines that students can trash are invaluable. If the students spend most of the time tearing it down, and rebuilding the cluster, it's not going to use much power.

          You could have them predict what changing CPU/Memory/Interconnect will do to performance, then make them try it out. Put some *Science* into Computer Science.
    • Re:don't rule out (Score:5, Interesting)

      by ILongForDarkness ( 1134931 ) on Tuesday March 12, 2013 @01:29PM (#43151373)

      Great point. Back in the day I worked on an SGI Origin mini/supercomputer (not sure if it qualifies; a 32-way symmetric multiprocessor is still kind of impressive nowadays, I guess, and even a 16-way Opteron isn't symmetric, I don't think). Anyway, at the time (~2000) there were much faster cores out there. Sure, we could use this machine for free for serial load (yeah, that is a waste), but we had to wait 3-4x as long as on a modern core. You ended up having to ssh in to start new jobs in the middle of the night so you didn't waste an evening of runs, versus getting 2-3 in during the day and firing off the fourth before you went to bed. Add to that that the IT guys had to keep a relatively obscure system around, provide space and cooling for this monster, etc.; they would have been better off just buying us 10 dual-socket workstations (~1GHz at the time, I guess).

    • Agreed. Once the OP calculates the TCO of the system, it might turn out that the free stuff isn't worth it. First you should find someone who has done something similar before. Then you can start from the actual bottlenecks and play out some alternative scenarios.
      What requirements do your calculations have? CPU-bound vs. I/O-bound? The TDP of an E8000, 65W, is not bad; this puts your presumed rack just short of the 2kW range. How much would that electricity cost you in a year? If your calculations are I/O bound,
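      For a rough sense of scale, a back-of-the-envelope sketch (per-node draw and tariff are guesses; plug in your own numbers):

          nodes = 14
          watts_per_node = 130       # guess: 65 W CPU TDP plus board, RAM and PSU losses
          price_per_kwh = 0.12       # guess: USD per kWh, check your local tariff

          total_kw = nodes * watts_per_node / 1000.0       # ~1.8 kW
          kwh_per_year = total_kw * 24 * 365               # ~16,000 kWh
          cost_per_year = kwh_per_year * price_per_kwh     # ~$1,900, before cooling

          print(f"{total_kw:.2f} kW -> {kwh_per_year:.0f} kWh/yr -> ${cost_per_year:.0f}/yr")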

  • by Anonymous Coward

    1. buy malware at a shady virus exchange to create a beowulf botnet
    2. ???
    3. profit!!!

  • I thought some folks had switched to GPUs for heavy number-crunching... Though the custom hardware setup no doubt renders this a moot point.

    Glad I could help :\
    • That's because it's a hell of a lot faster to use a GPU. The problem is that a decent GPU uses a lot more power than those PSUs can probably support, but even a modest GPU may be a wise investment.
    • by dbIII ( 701233 )
      They are very much memory-limited, so they don't work for everything. However, the biggest showstopper in most cases is when somebody wrote some cool code and then left, so it's stuck on whatever platform it was compiled for. Intel has recently brought out an x86-style, highly parallel, GPU-type card to catch this market.
  • by eyegor ( 148503 ) on Tuesday March 12, 2013 @01:07PM (#43151115)

    You'll need to consider how you're going to provision and maintain a collection of systems.

    Our company currently uses the ROCKS cluster distribution, which is a CentOS-based distribution that provisions, monitors and manages all of the compute nodes. It's very easy to have a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.

    One thing that I really like about ROCKS is their provisioning tool which is called the "Avalanche Installer". It uses bittorrent to load the OS and other software on each compute node as it comes online and it's exceedingly fast.

    I installed ROCKS on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.

    Check it out at www.rockclusters.org

    • by clark0r ( 925569 )
      How does this play with SGE / OGE? Can you centrally configure each node to mount a share? How about install a custom kernel, modules, packages, infiniband config and Lustre mount? If it can do these then it's going to be useful for real clusters.
      • by eyegor ( 148503 )

        It comes with a pretty recent version of SGE and openmpi installed. It's fully capable of using NFS shares and many people have used it with Infiniband. Cluster monitoring's done with ganglia. The kernel's customizable and you can add your own modules as "rolls" and can manage packages either as a post install or build it into the kickstart for each node. We use Isilon for our shared storage, but we're probably going to be setting up a gluster storage cluster too.

        Rocks is a great way for an organization to

    • by Anonymous Coward

      Correct website is -> www.rocksclusters.org

    • by pswPhD ( 1528411 ) on Tuesday March 12, 2013 @02:53PM (#43152147) Homepage

      I can recommend Rocks as well, although you WILL need the slave nodes to have disks in them (you could scrounge some ancient 40GB drives from somewhere...). You seem to want hardware information, so...

      First point is to have all the fans pointing the same way. Large HPC's arrange cabinets back-to-back, so you have a 'hot' corridor and a 'cold' corridor, which enables you to access both sides of the cabinet and saves some money on cooling.
      My old workplace had two clusters and various servers in an air conditioned room, with all the nodes pointing at the back wall. Probably similar to what you have.
      Don't know anything about the UPS, but I would assume having it on the floor would be OK.

      Good luck with your project. Write a post in the future telling us how it goes.

  • Really? (Score:5, Funny)

    by Russ1642 ( 1087959 ) on Tuesday March 12, 2013 @01:07PM (#43151119)
    Slashdotters only imagine building Beowulf clusters. This is the first time anyone's been serious about it.
  • by MetricT ( 128876 ) on Tuesday March 12, 2013 @01:09PM (#43151129)

    I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.

    If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity.

    • Absolutely right about HPC users. Unless you are a gluten for punishment generally you need to get results fast before you know what is next. So users will avoid your cluster nodes because they can get 2-3X the speed from a modern desktop. What you will get is the people that have an endless queue of serial jobs (been there my last computational project was about 250,000 CPU hours of serial work) but generally you'll have a lot of idle time. People will fire off a job and it will finish part way through the

      • Re: (Score:2, Informative)

        by Anonymous Coward

        I'm a glutton for correcting grammar mistakes, and I believe you meant to use the word "glutton" where you used the word "gluten." Gluten is a wheat based protein, and a glutton is someone that exhibits a desire to overeat.

      • Unless you are a gluten for punishment.

        If you're anything like my wife, gluten is punishment.

        Thank you, thank you, I'll be here all week! Enjoy the veal!

    • by serviscope_minor ( 664417 ) on Tuesday March 12, 2013 @02:38PM (#43151991) Journal

      I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.

      I strongly disagree. I actually had a very similar Ask Slashdot a while back.

      The cluster got built, and has been running happily since.

      If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity

      Oh yuck!

      But what you save in electricity, you'll lose in postdoc/developer time.

      Sometimes you need results. Developing for a GPU is slow and difficult compared to (e.g.) writing prototype code in MATLAB/Octave. You save heaps of vastly more expensive person and development time by being able to run those codes on a cluster. And also, not every task out there is even easy to put on a GPU.

      • by MetricT ( 128876 ) on Tuesday March 12, 2013 @02:51PM (#43152105)

        You *do* know that Matlab has been supporting GPU computing for some time now? We bought an entire cluster of several hundred nVidia GTX 480's for the explicit purpose of GPU computing.

        • You *do* know that Matlab has been supporting GPU computing for some time now?

          Yes, but only for specific builtins. If you want to do something a bit more custom, it goes back to being very slow.

          • I see this as being similar to using a WYSIWYG HTML editor. It's much faster to use Dreamweaver (or whatever the new favourite is) to get your site up and pretty, but it's also limiting and pumps out bloated and inefficient code. It's much more elegant and streamlined to code yourself, as well as more configurable, but it takes more time.

            You do your due diligence, you pick the best solution for you. Same with parallel computing. Same with everything.
            • Same with parallel computing. Same with everything.

              Sure. Ideally, you'd know in advance exactly what jobs you want to run and tailor the hardware to the jobs. Less ideally, you'd have a general idea of the class of jobs.

              I would advocate a cluster of PCs in general. Firstly because it's not easy to GPUify many jobs, and if you can GPUify it, then for a few hundred dollars per node you can upgrade the cluster to be a GPU cluster.

              The nice thing is that schedulers generally understand that machines have differe

  • by Mysticalfruit ( 533341 ) on Tuesday March 12, 2013 @01:14PM (#43151189) Homepage Journal
    I'm routinely mounting things in 42U cabinets that ought not be mounted in them, so I've got *some* insight.

    The standard for airflow is front to back and upwards. Doing some sticky note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1" thick plywood and dado-cut channels 1/4" top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be for easy removal of a dead board.

    Going on this idea, you could also make these as "units" and install two of them two deep in the cabinet (if you used L rails).

    Without doing any measuring, I'm suspecting this would get you 5 machines for 7U or 10 machines if you did 2 deep in 7U.
    • by hawguy ( 1600213 )

      The standard for airflow is front to back and upwards. Doing some sticky note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1" thick plywood and dado-cut channels 1/4" top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be for easy removal of a dead board.

      That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?

      • That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?

        The autoignition temperature for generic cheapo plywood is somewhere on the order of 300 degrees C. If you went with pine, which is still pretty cheap, it goes up to 427 degrees C.

        How hot do you think computers run?

        The dust, I could give you, if the wood used was cheap chipboard, balsa, or something else soft. Something even moderately hard like pine it wouldn't be a problem, as long as you properly cleaned off the sawdust from the cutting process. If you went all out and used oak, it's probably harder th

  • by plus_M ( 1188595 ) on Tuesday March 12, 2013 @01:14PM (#43151195)
    What do you intend to use for inter-node communication? Gigabit ethernet? You need to realize that latency in inter-node communication can cause *extremely* poor scaling for non-trivial parallelization. Scientific computing clusters typically use infiniband or something like it, which has extremely slow latency, but the equipment will cost you a pretty penny. If you are interested in doing computations across multiple computing nodes, you should really set up just two nodes and benchmark what kind of speed increase there is between running the job on a single node and on two nodes. My guess is that you are going to get significantly less than a 2x speedup. It is entirely possible that the calculation will be *slower* on two nodes than on just one. Of course, if you are just running a massive number of unrelated calculations, then inter-node communication becomes much less important, and this won't be an issue.
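    For a rough feel for why that two-node benchmark matters, here is a toy Amdahl-style model with a per-step network exchange (all numbers are made up; the gigabit figures of ~100 us latency and ~110 MB/s are ballpark, measure your own):

        def estimated_speedup(nodes, compute_per_step, bytes_per_step,
                              latency=100e-6, bandwidth=110e6):
            """Speedup over one node when each step exchanges bytes_per_step over the network."""
            comm = latency + bytes_per_step / bandwidth
            t_one_node = compute_per_step
            t_cluster = compute_per_step / nodes + comm
            return t_one_node / t_cluster

        # Communication-light job: two nodes help almost linearly.
        print(estimated_speedup(2, compute_per_step=1.0, bytes_per_step=1e6))    # ~1.96x
        # Communication-heavy job: two nodes are actually slower than one.
        print(estimated_speedup(2, compute_per_step=0.01, bytes_per_step=5e6))   # ~0.2x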
    • by plus_M ( 1188595 )
      And of course by "slow latency" I mean "low latency".
        • Actually - by slow latency you mean high latency. High latency is bad like low bandwidth is bad. You want the lowest latency numbers that you can afford. I know of people that count the speed-of-light propagation delay down a cable in their latency calculations because it matters to them (~5 ns/m).
        • I think the point was that they had made a typo in saying "slow latency" and really meant to type "low". But thanks for explaining exactly what they didn't mean.
  • by Peter Simpson ( 112887 ) on Tuesday March 12, 2013 @01:19PM (#43151233)
    It may initially seem like a good idea, but if the population isn't homogeneous, you could find your time eaten up looking for spares. With a single type of PC, a node can be sacrificed to keep others running. But these are systems near the end of their design lifetime (and loaded with dust -- and who knows what else?) so components (fans, HDDs, power supplies) are going to be starting to fail more frequently. And the rats' nest of power cables! Perhaps a bunch of multiprocessor, multicore server blades would be a better choice? They go pretty cheaply, and you'd get more cores per power supply, and use less floor space to boot, by rack mounting them.

    Scientific American article: http://www.scientificamerican.com/article.cfm?id=the-do-it-yourself-superc [scientificamerican.com]
  • by MerlynEmrys67 ( 583469 ) on Tuesday March 12, 2013 @01:21PM (#43151265)
    There is a reason that old hardware should be gotten rid of. Depending on the exact config of the 14 servers (processor/whatever) you could probably replace them with 1, maybe 2 servers. The current generation of Jefferson Pass servers holds 4 servers in a 2U sled - so you could replace this whole thing with a 2U solution that isn't exposed to the elements like you are proposing. It would be new, under warranty and faster than all get-out.

    Your solution will take 14 servers, connect them with ancient 1GbE interconnect and hope for the best. The interconnect for clusters REALLY matters, many problems are network bound - and not only network bound but latency bound as well. Look at the list of fastest supercomputers and you will barely see Ethernet anymore (especially at the high end) and definitely not 1GbE. Your new boxes will probably come with 10GbE that will definitely help... Especially since there will be fewer nodes to have to talk to (only 2, maybe 4)

    The other problem that you will run into is your system will take about 20x the power and 20x the air conditioning bill (yeah - that is a LOT of power there), the modern new system will pay for itself in 9-12 months (and that doesn't include the tax deduction for donating the old systems and making them Someone Else's Problem)

    Recycling old hardware always seems like fun. At the end of a piece of hardware's life cycle look at what it will actually cost to keep it in service - Just the electricity bill will bite you hard, then you have the maintenance, and fun reliability problems.

  • by Brit_in_the_USA ( 936704 ) on Tuesday March 12, 2013 @01:26PM (#43151331)
    SPECfp2006 rate results:
    e8600 34
    i7-3770 130
    ~4x the performance

    ...sell the E8xxx-series PCs in their boxes for $100 apiece with a Windows licence
    and put the $1400 towards buying 4 LGA1155 motherboards (4x $80), 4 unlocked K-series i7s (4x $230), 4x 8GB of DDR3 RAM (4x $40), and 4x ~300-400W budget power supplies (4x $30) = $1520

    Use a specialized clustering OS (Linux) and have a smaller, easier-to-manage system, with lots more DDR3 memory and a lower electricity (and air-conditioning) bill.
  • by bobbied ( 2522392 ) on Tuesday March 12, 2013 @01:37PM (#43151455)

    Unless you have a large number of identical machines capable of PXE booting and the necessary network hardware to wire them all together, you are really just building a maintenance nightmare. It might be fun to play with a cluster, but you'd do better to buy a couple of machines with as many cores as you can. It will take less space, less power, less fumbling around with configurations, less time and likely be cheaper than trying to cram all the old stuff into some random rack space.

    If you insist on doing this, I suggest the following (a rough sketch of point 3 follows after this comment):
    1. Only use *identical* hardware (or at least hardware that can run on exactly the same kernel image, modules and configurations), with the maximum memory and fastest networks you can.
    2. Make sure you have well-engineered power supplies and cooling.
    3. PXE boot all but one machine and make sure your cluster "self-configures" based on the hardware that shows up when you turn it on, because you will always have something broken.
    4. Don't use local storage for anything more than swap; everything comes over the network.
    5. Use multiple network segments, split between a storage network and an operational network.

    By the way... For the sake of any local radio operations, please make sure you don't just unpack all the hardware from its cases and spread it out on the workbench. Older hardware can be a really big RFI generator. Consider keeping it in a rack that offers at least some shielding.
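    A rough sketch of the "self-configure from whatever hardware shows up" idea in point 3 above (the MAC inventory and hostname scheme are hypothetical; the real thing would live in your PXE image):

        import subprocess

        KNOWN_NODES = {                      # hypothetical inventory: MAC -> hostname
            "00:1a:a0:12:34:56": "node01",
            "00:1a:a0:12:34:57": "node02",
        }

        def primary_mac(iface="eth0"):
            # Read the MAC of the PXE-boot interface from sysfs.
            with open(f"/sys/class/net/{iface}/address") as f:
                return f.read().strip().lower()

        def configure():
            mac = primary_mac()
            # Unknown boards still come up, just with a generated spare name.
            name = KNOWN_NODES.get(mac, "spare-" + mac.replace(":", "")[-4:])
            subprocess.run(["hostname", name], check=True)   # needs root
            return name

        if __name__ == "__main__":
            print(configure())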

  • Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis?

    Keep them all the same, so that the system works as one big fan, pulling cool air from one side of the cabinet and exhausting hot air from the other. It's easiest to visualize if you imagine the airflow with a simple scenario. Imagine you had all of the even numbered shelves facing backward, blowing hot air to the front of the rack, while all the odd numbered shelves were trying to suck cool air from the front. That would totally fail because the odd numbered shelves would be sucking in hot air blown out

  • We have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small)

    1 BTU/hr is about 0.29 watts; put the other way, 1 watt of power produces about 3.4 BTU/hr of heat. So take your total power usage in watts and multiply by roughly 3.4. That's how many BTU/hr of heat the rack will dissipate (all power eventually turns to heat). That's how much ADDITIONAL cooling you'll need beyond what's already used to keep the room cool.
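    Worked example for the proposed rack (per-node wattage is a guess; substitute measured numbers):

        nodes = 14
        watts_per_node = 130            # guess for a caseless E8000 board plus PSU losses
        btu_hr_per_watt = 3.412         # 1 W dissipated = ~3.4 BTU/hr of heat

        rack_watts = nodes * watts_per_node            # ~1,820 W
        rack_btu_hr = rack_watts * btu_hr_per_watt     # ~6,200 BTU/hr
        print(rack_btu_hr)   # roughly a quarter of the 24,000 BTU/hr unit, before room load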

  • Messing with old hardware to try and make it rack mountable? Pfft. Save the effort. Buy a few mid-range servers and you'll get similar compute performance compared to that energy hog of a cluster. If you really want to use that hardware, don't remount it. Just stack the servers in a corner, plug them in, and install ROCKS. It's still gonna be an energy hog and have crappy performance though.
  • Raspberry Pi.

    http://www.tomshardware.com/news/Raspberry-Pi-Supercomputer-Legos-Linux,17596.html

  • Why not give them away and buy 2 i7 26xx or better CPU's for the same performance? You could fit that in 1U instead of a 42U rack. No switch required, smaller UPS required, less aircon load, less electricity.

  • Check out the Microwulf work. It's not necessarily what you're looking for, but the community has produced some creative custom cases/racks. It might give you some fresh ideas.
  • by Anonymous Coward on Tuesday March 12, 2013 @03:17PM (#43152379)

    We have a cluster at my lab that's pretty similar to what the submitter describes. Over the years, we've upgraded it (by replacing old scavenged hardware with slightly less old scavenged hardware) and it is now a very useful, reasonably reliable, but rather power-hungry tool.

    Thoughts:

    - 1GbE is just fine for our kind of inherently parallel problems (Monte Carlo simulations of radiation interactions). It will NOT cut it for things like CFD that require fast node-to-node communication.

    - We are running a Windows environment, using Altair PBS to distribute jobs. If you have Unix/Linux skills, use that instead. (In our case, training grad students on a new OS would just be an unnecessary hurdle, so we stick with what they already know.)

    - Think through the airflow. Really. For a while, ours was in a hot room with only an exhaust fan. We added a portable chiller to stop things from crashing due to overheating; a summer student had to empty its drip bucket twice a day. Moving it to a properly ventilated rack with plenty of power circuits made a HUGE improvement in reliability.

    - If you pay for the electricity yourself, just pony up the cash for modern hardware, it'll pay for itself in power savings. If power doesn't show up on your own department's budget (but capital expenses do), then by all means keep the old stuff running. We've taken both approaches and while we love our Opteron 6xxx (24 cores in a single box!) we're not about to throw out the old Poweredges, or turn down less-old ones that show up on our doorstep.

    - You can't use GPUs for everything. We'd love to, but a lot of our most critical code has only been validated on CPUs and is proving very difficult to port to GPU architectures.

    (Posting AC because I'm here so rarely that I've never bothered to register.)

  • by ArhcAngel ( 247594 ) on Tuesday March 12, 2013 @03:36PM (#43152573)
    Go ask the guys over at Microwulf. [calvin.edu] They appear to have licked this particular challenge and link to others who have as well.
  • Racks are built for airflow from front to back, so you'll need to turn the boards 90° unless you remove the side panels... No, you do not want to alternate airflow; you want a hot side and a cool side, which makes cooling easier. If you can, try to vent the hot air out instead of cooling it down; it is cheaper. Btw, did you consider putting 4 or 5 boards vertically in 2 rows behind each other?
  • Actual experience (Score:3, Interesting)

    by Anonymous Coward on Tuesday March 12, 2013 @07:58PM (#43155063)

    I've done this. Starting with a couple of racksful of PS/2 55sx machines in the late '90s and continuing on through various iterations, some with and some without budgets. I currently run an 8-member heterogenous cluster at home (plus file server, atomic clock, and a few other things), in the only closet in the house that has its own AC unit. It's possible I know something about what you're doing.

    Some of what I'll mention may involve more (wood) shop or electrical engineering than you want to undertake.

    My read of your text is that there is a computer lab that will be occupied by people that will also contain this rack with dismounted Optiplex boards and P/Ss. This lab has an A/C unit that you believe can dissipate the heat generated by new lab computers, occupants, these old machines in the rack, and the UPSs. I'll take your word, but be sure to include all the sources of heat in your calculation, including solar thermal loading if, like me, you live in "the hot part of the country". Unfortunately, this eliminates the cheapest/easiest way of moving heat away from your boards -- 20" box fans (e.g. http://www.walmart.com/ip/Galaxy-20-Box-Fan-B20100/19861411 ) mounted to an assembly of four "inward pointing" boards. These can move somewhat more air than 80 mm case fans, especially as a function of noise. One of the smartest thermal solutions I've ever seen tilted the boards so that the "upward slope" was along the airflow direction -- the little bit of thermal buoyancy helped air arriving at the underside of components to flow uphill and out with the rest of the heated air. I.e., this avoided a common problem of unmodeled airflow systems of having horizontal surfaces that trapped heated air and allowed it to just get hotter and hotter.

    Nevertheless, the best idea is to move the air from "this side" to "that side" on every shelf. Don't alternate directions on successive shelves. If you're actually worried about EMI, then you must have an open sided rack (or you shouldn't be worried). One option is to put metal walls around it, which will control your airflow. Another option that costs $10 is to make your own Faraday cage panels however you see fit. (I've done chicken wire and I've done cardboard/Al foil cardboard sandwiches. Both worked.)

    You should probably consider dual-mounting boards to the upper *and* lower sides of your shelves. Another layout I've been very happy with is vertical board mounts (like blades) with a column of P/Ss on the left or right.

    A *really* good idea for power distribution is to throw out the multiple discrete P/Ss and replace them with a DC distribution system. There's very little reason to have all those switching power supplies running to provide the same voltages over 6 feet. The UPSs are the heaviest thing in your setup; putting them at the bottom of the rack is probably a good idea. They generate some heat on standby (not much) and a lot more when running. Of course, when they're running, the AC is (worst case) also off and at least one machine should have gotten the "out of power" message and be arranging for all the machines to "shutdown -h now". (A rough sketch of that shutdown fan-out follows at the end of this comment.)

    You only plan on having two cables per machine (since your setup seems KVM-less and headless), so wire organization may not be that important. (Yes, there are wiring nazis. I'm not one.) Pick Ethernet cables that are the right length (or get a crimper, a spool, and a bag of plugs and make them to the exact length). You'll probably get everything you need from 2-sided Velcro strips to make retaining bands on the left and right columns of the rack. Label both ends of all cables. Really. Not kidding. While you're at it, label the front and back of every motherboard with its MAC(s) and whatever identifiers you're using for machines.
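    A rough sketch of that shutdown fan-out (hostnames and passwordless root SSH to the diskless nodes are assumed; you would normally hook this to the UPS daemon's on-battery event):

        import subprocess

        NODES = ["node%02d" % i for i in range(1, 15)]   # hypothetical names for the 14 boards

        def shutdown_all():
            for node in NODES:
                # Fire-and-forget: a node that is already down just fails the ssh.
                subprocess.run(["ssh", "root@" + node, "shutdown", "-h", "now"], check=False)

        if __name__ == "__main__":
            shutdown_all()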

  • Functioning computer systems are rarely useless; the E8000 systems the OP has will run software just like they did a few years ago when they were purchased. The most important question is: what do you want this cluster to do? If you want the experience of building it, including solving the HW issues of racking and stacking, and the software issues of cluster management software, job scheduling and resource management, then don't throw the equipment away. There are many opportunities for making decisions t
  • Unless you're specifically undertaking this project to learn more about building a cluster, don't build a cluster. Over time it would be cheaper in terms of power, cooling, manpower and space to toss the old equipment and replace it with something more powerful, or better yet just toss everything and spin up cluster resources on a cloud platform as needed. AWS, for example has very good support for cluster computing and can put you in or very near supercomputer territory for $1,000/hr.
  • Pulling the system out of the case seems... odd. Are you that short on space that you can't have another rack?

    Several reasons:
          1. dust
          2. static
          3. a. cooling: real servers have plastic shrouds to guide the air from the fans through the heat sinks. Without that, the cooling won't be anywhere near as good, and possibly not good enough to keep them from shutting down when they're being run hard.
             b. DO NOT ALTERNATE directions. In data centers, in server rooms, etc., you have them all in a row facing the same way, blow your cool air towards the front, and let it get somewhat warm behind. This is how they're designed to be used.

    UPSes on the bottom: sure. I've put some in the middle of the rack, but those are rack-mount. MAKE SURE that you leave clearance to open 'em up when you need to replace the batteries.

    NOTE: when you buy replacement batteries for these UPSes, UNDER NO CIRCUMSTANCES BELIEVE ANY MANUFACTURER OR RESELLER. TELL THEM THAT IF THEY DON'T SEND YOU HR - HIGH RATE - BATTERIES, YOU WILL SEND THEM BACK. APC rackmounts WILL NOT ACCEPT *A*N*Y*T*H*I*N*G* but an HR battery, and will continue to tell you that you need to replace it, forever.

    I'm assuming you'll be running linux. I'm also assuming that you're using this for heavy duty computing, not load balancing or H/A (high availability).

    For clustering, also check out torque, which is a standard clustering package, though it does need the jobs to be parallel processing aware.

    For the person who mentioned "time" as a cost: I'd assume that the OP was asked to do this "as time permitted", and is certainly something to do that's useful, as opposed to playing solitaire, waiting for something to need work....

                      mark

  • Whatever happened to Plan9?

    If you are serious about this project I would use Plan 9 because it is designed to use all of your hardware transparently. They can always use more members in this small community. You might find this underrated platform quite delightful:

    http://plan9.bell-labs.com/plan9/ [bell-labs.com]

    Ignore all the naysayers. Just having experience with Plan 9 makes this experiment worth it.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...