Ask Slashdot: Building a Cheap Computing Cluster? 160
New submitter jackdotwa writes "Machines in our computer lab are periodically retired, and we have decided to recycle them and put them to work on combinatorial problems. I've spent some time trawling the web (this Beowulf cluster link proved very instructive) but have a few reservations regarding the basic design and air-flow. Our goal is to do this cheaply but also to do it in a space-conserving fashion. We have 14 E8000 Core2 Duo machines that we wish to remove from their cases and place side-by-side, along with their power supply units, on rackmount trays within a 42U (19", 1000mm deep) cabinet." Read on for more details on the project, including some helpful pictures and specific questions.
jackdotwa continues: "Removing them from their cases means we can fit two machines into 4U (as opposed to 5U). The cabinet has extractor fans at the top, and the PSUs and motherboard fans (which pull air off the CPU and exhaust it laterally; see images) face in the same direction. Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis? Would there be electrical interference with the motherboards and CPUs exposed in this manner? We have a 2-ton (24,000 BTU) air conditioner which will be able to maintain a cool room temperature (the lab is quite small), judging by the guide in the first link. However, I've been asked to place UPSs in the bottom of the cabinet (they will likely be non-rackmount UPSs, as they are considerably cheaper). Would this be, in anyone's experience, a realistic request? (I'm concerned about the additional heating in the cabinet itself.) The nodes in the cabinet will be diskless and connected via a rack-mountable gigabit Ethernet switch to a master server. We are looking to purchase rack-mountable power distribution units to clean up the wiring a little. If anyone has any experience in this regard, suggestions would be most appreciated."
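As a quick feasibility check on the numbers above, here is a minimal Python sketch of the space and cooling arithmetic. The two-nodes-per-4U layout and the 24,000 BTU air conditioner are from the submission, but the ~150 W per-node draw and the 3.412 BTU/h-per-watt conversion are assumptions, not measurements.

# Rough sanity check of rack space, power draw, and cooling headroom.
# Assumed: ~150 W per stripped node under load (not a measured figure).
NODES = 14
WATTS_PER_NODE = 150
NODES_PER_SHELF = 2
SHELF_U = 4
AC_BTU_PER_HOUR = 24000                    # the 2-ton air conditioner

shelves = -(-NODES // NODES_PER_SHELF)     # ceiling division -> 7 shelves
rack_units = shelves * SHELF_U             # 28U of the 42U cabinet
total_watts = NODES * WATTS_PER_NODE       # ~2100 W
heat_btu = total_watts * 3.412             # ~7165 BTU/h of heat

print(f"{shelves} shelves, {rack_units}U used of 42U")
print(f"~{heat_btu:.0f} BTU/h of heat vs {AC_BTU_PER_HOUR} BTU/h of cooling")

On those assumptions the nodes alone occupy 28U and generate well under a third of the air conditioner's rated capacity, leaving headroom for the switch, the UPSs, and losses.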
Once you solve the hardware challenges..... (Score:5, Informative)
You'll need to consider how you're going to provision and maintain a collection of systems.
Our company currently uses the ROCKS cluster distribution, a CentOS-based distribution that provisions, monitors and manages all of the compute nodes. It's very easy to have a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.
One thing that I really like about ROCKS is its provisioning tool, the "Avalanche Installer". It uses BitTorrent to load the OS and other software onto each compute node as it comes online, and it's exceedingly fast.
I installed ROCKS on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.
Check it out at www.rocksclusters.org
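Once the nodes are provisioned, it's handy to confirm every one actually came up. Here is a generic Python sketch of that kind of sanity check (it is not part of ROCKS itself; the compute-0-N hostnames follow the usual ROCKS naming convention but are assumed here):

#!/usr/bin/env python3
# Check that each compute node answers on the SSH port.
import socket

NODES = [f"compute-0-{i}" for i in range(14)]   # hypothetical node names
SSH_PORT = 22

for node in NODES:
    try:
        with socket.create_connection((node, SSH_PORT), timeout=3):
            print(f"{node}: up")
    except OSError:
        print(f"{node}: unreachable")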
Sounds interesting... (Score:5, Informative)
The standard for airflow is front to back and upwards. Doing some sticky-note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1" thick plywood and dado-cut 1/4" channels top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back, and it would put the Ethernet ports at the back. Another thing this would allow is easy removal of a dead board.
Going on this idea, you could also make these as "units" and install two of them two deep in the cabinet (if you used L rails).
Without doing any measuring, I'm suspecting this would get you 5 machines for 7U or 10 machines if you did 2 deep in 7U.
Re:Just use Amazon AWS (Score:5, Informative)
It's 2013; don't build your own cluster, just use AWS EC2 spot instances.
An EC2 "High CPU Medium" instance is probably close to his Core 2 Duo's (it has 1.7GB RAM + two cores of 2.5 EC2 compute units each (each ECU is equivalent to a 2007 era 1.2Ghz Xeon).
Current spot pricing is $0.018/hour, so a month would cost him around $12.96. (not including storage, add about a dollar for 10GB of EBS disk space).
If his computers use 150W of power each, at $0.12/KWh, they'll cost exactly $0.018 -- the same price as an EC2 instance excluding storage.
However spot pricing is not guaranteed, so he'll have to be prepared to shut down his instances when the spot price rises above what he's willing to pay -- full price for the instance is $0.145/hour, but he could get that down to $0.09/hour if he's willing to pay $161 to reserve the instance for 3 years.
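The arithmetic behind those figures, as a small Python sketch (the prices are the 2013-era numbers quoted above, not current AWS pricing):

# EC2 spot cost vs. electricity cost for one Core 2 Duo node.
SPOT_PER_HOUR = 0.018            # "High-CPU Medium" spot price, $/hour
WATTS_PER_NODE = 150
ELECTRICITY_PER_KWH = 0.12

spot_per_month = SPOT_PER_HOUR * 24 * 30                         # $12.96
power_per_hour = (WATTS_PER_NODE / 1000) * ELECTRICITY_PER_KWH    # $0.018

print(f"Spot instance: ${spot_per_month:.2f}/month")
print(f"Electricity per node: ${power_per_hour:.3f}/hour")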
sell them and buy new.... (Score:5, Informative)
E8600: 34 vs. i7-3770: 130, roughly 4x the performance,
and use the $1400 towards buying four LGA1155 motherboards (4 x $80), four unlocked K-series i7s (4 x $230), 4 x 8 GB of DDR3 RAM (4 x $40), and four ~300-400 W budget power supplies (4 x $30) = $1520.
Use a specialized clustering OS (Linux) and have a smaller, easier-to-manage system with lots more DDR3 memory and a lower electricity (and air-conditioning) bill....
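For reference, the parts list above adds up like this (a trivial Python sketch; the prices are the commenter's estimates):

# Cost of the suggested 4-node replacement build.
parts = {
    "LGA1155 motherboard":    (4, 80),
    "unlocked K-series i7":   (4, 230),
    "8 GB DDR3":              (4, 40),
    "300-400 W power supply": (4, 30),
}
total = sum(qty * price for qty, price in parts.values())
print(f"Total: ${total}")   # $1520, against the ~$1400 from selling the old nodes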
Re:Probably not worth your time (Score:2, Informative)
I'm a glutton for correcting grammar mistakes, and I believe you meant to use the word "glutton" where you used the word "gluten." Gluten is a wheat-based protein, and a glutton is someone who exhibits a desire to overeat.
Re:Don't do it (Score:5, Informative)
For general purpose computing, you are correct. It wouldn't be pessimistic at all to expect one computer to start malfunctioning every week.
Huh? E8000 Core2 Duos are not that old. I've got a rack of a half dozen Pentium IIIs that I've run for years without problems. What kind of crap hardware do you run where you're expecting 1 failure out of 14 machines every week?
This is assuming, of course, that when you set up the cluster in the first place you check the motherboards for bad caps, loose cooling fans, etc., and discard/repair anything that even looks like it might possibly fail. Considering the effort this guy seems to be going to, that's probably (but I've been wrong about that kind of thing before) a given.
From the pics, these are BTX machines, which in my experience have better cooling than ATX, and are less likely to have overheated, failing caps in the first place.
Re:Probably not worth your time (Score:4, Informative)
You *do* know that Matlab has supported GPU computing for some time now? We bought an entire cluster of several hundred nVidia GTX 480s for the explicit purpose of GPU computing.
Re:Once you solve the hardware challenges..... (Score:4, Informative)
I can recommend Rocks as well, although you WILL need the slave nodes to have disks in them (you could scrounge some ancient 40 GB drives from somewhere...). You seem to want hardware information, so...
First point is to have all the fans pointing the same way. Large HPC installations arrange cabinets back-to-back, so you have a 'hot' corridor and a 'cold' corridor, which lets you access both sides of the cabinets and saves some money on cooling.
My old workplace had two clusters and various servers in an air-conditioned room, with all the nodes pointing toward the back wall. Probably similar to what you have.
Don't know anything about the UPS, but I would assume having it on the floor would be OK.
Good luck with your project. Write a post in the future telling us how it goes.
Re:Don't do it (Score:2, Informative)
How do you check motherboards for bad capacitors?
Bad caps will swell or bulge at the top. Eventually they will leak electrolyte, and corrosion will occur on the tops.
FYI, the capacitors are the ones shaped like cylinders or tiny soda cans. Sometimes there will be '+' or 'x' perforations on the tops where the swelling usually happens.