Ask Slashdot: Building a Cheap Computing Cluster?
New submitter jackdotwa writes "Machines in our computer lab are periodically retired, and we have decided to recycle them and put them to work on combinatorial problems. I've spent some time trawling the web (this Beowulf cluster link proved very instructive) but have a few reservations regarding the basic design and air-flow. Our goal is to do this cheaply but also to do it in a space-conserving fashion. We have 14 E8000 Core2 Duo machines that we wish to remove from their cases and place side-by-side, along with their power supply units, on rackmount trays within a 42U (19", 1000mm deep) cabinet." Read on for more details on the project, including some helpful pictures and specific questions.
jackdotwa continues: "Removing them means we can fit two machines into 4U (as opposed to 5U). The cabinet has extractor fans at the top, and the PSUs and motherboard fans (which pull air off the CPU and exhaust it laterally -- see images) face in the same direction. Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis? Would there be electrical interference with the motherboards and CPUs exposed in this manner? We have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small), judging by the guide in the first link. However, I've been asked to place UPSs in the bottom of the cabinet (they will likely be non-rackmount UPSs as they are considerably cheaper). Would this be, in anyone's experience, a realistic request (I'm concerned about the additional heating in the cabinet itself)? The nodes in the cabinet will be diskless and connected via a rack-mountable gigabit ethernet switch to a master server. We are looking to purchase rack-mountable power distribution units to clean up the wiring a little. If anyone has any experience in this regard, suggestions would be most appreciated."
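For a sense of the kind of workload such a diskless master/worker setup typically runs, here is a minimal, hypothetical sketch of spreading an embarrassingly parallel combinatorial search across the nodes with MPI (mpi4py assumed installed everywhere); the scoring function, search space, and chunking are placeholders, not anything from the submission.

```python
# Sketch: strided distribution of a combinatorial search over MPI ranks.
from itertools import permutations
from mpi4py import MPI

def evaluate(candidate):
    """Placeholder scoring function for one candidate solution."""
    return sum(i * v for i, v in enumerate(candidate))

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's index (0 .. size-1)
size = comm.Get_size()      # total number of MPI processes

# Each rank takes every size-th candidate, so no work list needs shipping.
best = None
for i, candidate in enumerate(permutations(range(10), 4)):
    if i % size != rank:
        continue
    score = evaluate(candidate)
    if best is None or score > best[0]:
        best = (score, candidate)

# Gather per-rank winners on rank 0 (the master/head node).
winners = comm.gather(best, root=0)
if rank == 0:
    print("overall best:", max(w for w in winners if w is not None))
```

Assuming MPI is installed and a hostfile lists the 14 dual-core nodes, this could be launched from the master with something like `mpirun -np 28 --hostfile nodes python search.py` (the file names here are illustrative).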
Imagine (Score:5, Funny)
A beowulf cluster of these! FP
Re: (Score:2)
Re:Imagine (Score:5, Interesting)
Yeah, except back in the 2000s people would be thinking it is a cool idea, and there would be at least four other people who had recently done it and could give tips.
Now it is just people saying "Meh, throw it away and buy newer more powerful boxes". True, and the rational choice, but still rather bland...
I remember when nerds here were willing to do all kinds of crazy things, even if they were not a good long term solution. Maybe we all just grew old and crotchety or something :P
(Spoken as someone who had a lot of fun building an openmosix cluster from old AMD 1.2GHz machines my uni threw out.)
Re: (Score:1)
The difference is that we take the clustering part for granted now. The question wasn't something interesting like how do I do supercomputer-like parallel activities on regular PCs or solve operational issues. It was just about physically putting a bunch of random parts into a rack on a low budget.
But we won already... now, the mainstream is commodity rack parts. You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment.
Re:Imagine (Score:5, Insightful)
You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment.
Methinks you've missed the key purpose of using old equipment one already owns...
Re: (Score:1)
No, we haven't. What you and many others (including the poster) miss is how much time and effort -- and yes, money -- will go into building this custom, already obsolete, cluster. His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)
(I'm in the same boat -- as I'm sure everyone else is. I have stacks of o
Re: (Score:3)
Since this is obviously a 'pet' project, i.e. something he's doing just to see if it can be done, time and effort costs don't really factor in, IMO. Like when I work on my own truck, I don't say, "it cost me $300 in parts and $600 in labor to fix that!"
His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)
I find the idea of jury-rigging up a rackmount a bit specious myself... But again, this appears to be a 'can we do it' type project, so I don't feel compelled to criticize like I would if he were trying to do this with some mission-critical system.
Re: (Score:3)
Re: (Score:2)
Again, you miss the point. You did it "right"... took the old machines from desks and sat them on a shelf. Translation: the absolute minimum amount of time and effort. The poster is taking the Dell Optiplexes apart to make a "google cluster" (i.e. motherboard bolted to a sheet), thus making them take (marginally) less space. He's putting in a whole lot of work for very little gain.
(For the record, I've built clusters on the uber-cheap using 1U (quad-core opteron) rack mount servers from ebay sellers. A
Re: (Score:2)
I remember when nerds here were willing to do all kinds of crazy things, even if they were not a good long term solution. Maybe we all just grew old and crotchety or something :P
Actually, I suspect it is the young risk-averse special snowflakes that are all saying to throw money at it. D'oh.
Re: (Score:2, Funny)
Back then, people read slashdot at -1, nested, and laughed at the trolls. Right now, I wouldn't be surprised if I'm modded -1 within about 15 minutes by an editor with infinite mod points. Post something the group-think disagrees with, get downmodded. Post something anonymous, no one will read it. Post something mildly offensive, get downmodded.
We didn't have fucking flags back then and the editors didn't delete posts. Now they do. Fuck what this site has become.
Re: (Score:1)
It's been a long time since "Imagine a beowulf cluster of those!" made any degree of sense, or even appeared on ./.
Natalie Portman's Hot Gritts to you!
Re: (Score:2)
Don't do it (Score:5, Insightful)
Re: (Score:3, Insightful)
Get an older, CUDA-capable card and have whoever writes your code target it instead. I doubled all my SETI work units over 10 years in just 2 weeks. A CPU is just a farmer throwing food to the racehorse nowadays.
Re: (Score:3)
GPU-based computing's a great idea, but not appropriate for all problems. There's also significantly more work managing memory and all that with a GPU.
We have about 50 M2070 GPUs in production and virtually no one uses them. They depend instead on our CPU resources since they're easier to program for.
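To illustrate the parent's point about the extra memory management GPU code needs, here is a minimal sketch using CuPy (an assumption for illustration only -- the thread's M2070 users would more likely be writing CUDA C directly); the array sizes and workload are arbitrary.

```python
# The CPU path just computes; the GPU path has to manage host<->device copies.
import numpy as np
import cupy as cp

a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

# CPU version: no data movement to think about.
c_cpu = a_host @ b_host

# GPU version: explicitly move data to the device, compute, then copy the
# result back -- and for real workloads, also worry about fitting in device
# memory and overlapping transfers with compute.
a_dev = cp.asarray(a_host)          # host -> device copy
b_dev = cp.asarray(b_host)
c_dev = a_dev @ b_dev               # runs on the GPU
c_gpu = cp.asnumpy(c_dev)           # device -> host copy

assert np.allclose(c_cpu, c_gpu, rtol=1e-3, atol=1e-3)
```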
Re: (Score:1)
Re: (Score:1)
If no-one's using the M2070s, a project like Einstein@home [uwm.edu] certainly could.
Re: (Score:2)
Unfortunately, most of our clusters are on closed networks.
Re:Don't do it (Score:5, Insightful)
Seriously, it isn't worth your effort - especially if you want something reliable. People who set out to make homemade clusters find out the hard way about design issues that reduce the life expectancy of their cluster. There are professionals who can build you a proper cluster for not a lot of money if you really want your own, or even better you can rent time on someone else's cluster.
If the goal of this is reliable performance, you're absolutely right. But if the goal is to teach yourself about distributed computing, networking, diskless booting, and all the issues that come up in building a cluster, on the cheap -- then this is a great idea. Just don't expect much from the end product -- you'll get more performance from a modern box with tens of cores on a single motherboard.
Re: (Score:1)
I agree with this poster. After building a homebrew HPC environment and then working with a vendor-engineered solution, I can tell you that reusing old hardware is really not worth it other than as a learning exercise. Nevertheless, building it would be fun, just not practical. So from a learning perspective, knock yourself out.
From a pragmatic point of view, the hardware is old, and not very efficient in terms of electricity. Also considering that a single TESLA card can deliver anywhere from 2 to 4
Re: (Score:3)
Re: (Score:2)
Nonsense!
Home-built cluster can be cheap and very educational.
http://helmer.sfe.se/ [helmer.sfe.se]
Perhaps as a hobby. But generally they aren't cost effective if you're paying the labor for someone to implement it with 5-year-old hardware in a cluster-fuck (pun intended), jammed into a rack in a haphazard arrangement, when there isn't even a clear need or requirement for it.
Re: (Score:1)
Re: (Score:2)
As for the hardware side, so long as you stick to nothing more exotic than gigabit copper it's not hard. Taking things out of their chassis like the poster suggests is asking for a bit of trouble since they are designed to channel the incoming air, so that would need ugly measures like large diameter fans to force air through
Re: (Score:2)
Seriously, it isn't worth your effort - especially if you want something reliable.
Huh? Maybe they just want to do it because it is possible? Kind of like climbing a mountain...
There are professionals who can build you a proper cluster
And how did those professionals become professionals? How did they learn the pitfalls about building clusters? What happened to learning something yourself because it is possible?
*sigh* Just like the "scientists" who recently discovered that people with different values have... different values. I despair.
Re:Don't do it (Score:5, Informative)
For general purpose computing, you are correct. It wouldn't be pessimistic at all to expect one computer to start malfunctioning every week.
Huh? E8000 Core2 Duos are not that old. I've got a rack of a half dozen Pentium IIIs that I've run for years without problems. What kind of crap hardware do you run where you're expecting 1 failure out of 14 machines every week?
This is assuming, of course, that when you set up the cluster in the first place, you check motherboards for bad caps, loose cooling fans, etc, and discard/repair anything that looks even like it might possibly fail. Considering the effort this guy seems to be going to, that's probably (but I've been wrong about that kind of thing before) a given.
From the pics, these are BTX machines, which in my experience have better cooling than ATX, and are less likely to have overheated, failing caps in the first place.
Re: (Score:2)
Re: (Score:2, Informative)
How do you check motherboards for bad capacitors?
Bad caps will swell or bulge at the top. Eventually they will leak electrolyte and corrosion will occur on the tops.
FYI the capacitors are the ones shaped like cylinders or tiny soda cans. Sometimes there will be a '+' or 'x' scored into the tops, which is where the swelling usually happens.
Re: (Score:3)
How do you check motherboards for bad capacitors?
Bulges are bad. Leaks are bad. If the smoke has been released, doubly bad.
Other than that, you have to know what voltage is supposed to be on them, and measure it. If you still suspect something, use a scope. Worst case, you have to desolder it, then check its value and ESR. Mostly, I don't bother; I just replace suspect caps until whatever it is works again.
Re: (Score:3)
Huh? E8000 Core2 Duos are not that old. I've got a rack of a half dozen Pentium IIIs that I've run for years without problems.
What are you smoking? E8000 Core2 Duos are ancient. These are all five-year-old CPUs. Five years in which Intel has been focusing specifically on better power efficiency, which in turn leads to better cooling efficiency, all while increasing the number of cores per chip. The 14 E8000 systems which are going to take up 42U of rack space can and should be replaced by a single 1U Dell R620 with 2x E5-2690 processors (8c + hyperthreading), with 8x 16GB 1600MHz ECC DIMMs (which will probably be more mem
Re: (Score:2)
Ditto. The labor alone will kill any perceived saving.
Sell those old boxes for $100 a pop to students and buy something nice. In my experience if you have a Dell Optiplex of that era with the original power supply and motherboard, it's just waiting to die. Check the MB caps. Dell had to replace 2/3rd of the power supplies and 1/2 of the motherboards in the 100 that we owned within the first 2 years.
Re: (Score:2)
Absolutely nowhere in my comment did I state anything about power efficiency. My simple point was reliability.
If you're expecting a 7% failure rate per week, then you run crap hardware. As I said, I've got a half dozen Pentium IIIs, which are at least 3 times as old as these, that have been running flawlessly for years, so how does your setup suck so badly?
Re: (Score:2)
The whole point of the thread was that the old machines would give a high failure rate, of 1/14 per week.
If buying new machines is still going to give a high failure rate, then that's not a reason to upgrade.
I realize power and performance are still valid reasons, but that wasn't where this particular thread was going.
don't rule out (Score:5, Insightful)
throwing gear away or giving it away. Just because you have it doesn't mean you have to, or should, use it. If energy and space efficiency are important, you need to carefully consider what you are reusing. Sure, what you have now may have already fallen off the depreciation books, but if it's going to draw twice the power and take double the space that newer used kit would, it may not be the best option, even when the other options involve purchasing new or newer-used gear.
Not saying you need to do this, just recommending you keep an open mind and don't be afraid to do what needs to be done if you find it necessary.
Re:don't rule out (Score:5, Interesting)
Totally agree. We had a bunch of dual dual-core server blades that were freed up, and after looking at the power requirements per core for the old systems we decided it would be cheaper in the long run to retire the old servers and buy a smaller number of higher-density servers.
The old blades drew 80 watts/core (320 watts) and the new ones which had dual sixteen-core Opterons drew 10 watts/core for the same amount of overall power. That's a no brainer when you consider that these systems run 24/7 with all CPUs pegged. More cores in production means your jobs finish up faster, you'll be able to have more users and more jobs running and use much less power in the long run.
Re:don't rule out (Score:5, Insightful)
I agree. I've been doing IT for a while now, and this is the kind of thing that *sounds* good, but generally won't work out very well.
Tell me if I'm wrong here, but the thought process behind this is something like, "well we have all this hardware, so we may as well make good use out of it!" So you'll save a few hundred (or even a few thousand!) dollars by building a cluster of old machines instead of buying a server appropriate for your needs.
But let's look at the actual costs. First, let's take the costs of the additional racks, and any additional parts you'll need to buy to put things together. Then there's the work put into implementation. How much time have you spent trying to figure this out already? How many hours will you put into building it? Then troubleshooting the setup, and tweaking the cluster for performance? Now double the amount of time you expect to spend, since nothing ever works as smoothly as you'd like, and it'll take at least twice as long as you expect.
That's just startup costs. Now factor in the regular costs of additional power and AC. Then there's the additional support costs from running a complex unsupported system, which is constructed out of old unsupported computer parts with an increased chance of failure. This thing is going to break. How much time will you spend fixing it? What additional parts will you buy? Will there be any loss of productivity when you experience down-time that could have been avoided by using a new, simple, supported system? What's the cost of that lost productivity?
That's just off the top of my head. There are probably more costs than that.
So honestly, if you're doing this for fun, so that you can learn things and experiment, then by all means have at it. But if you are looking for a cost-effective solution to a real problem, try to take an expansive view of all the costs involved, and compare *all* of the costs of using old hardware vs. new hardware. Often, it's cheaper to use new hardware.
Re:don't rule out (Score:5, Insightful)
Re: (Score:1)
While I don't agree that this project is a GOOD idea: This! A thousand times, THIS!
I just spent 6 months convincing the management here that we can update our 7-year-old servers with 50% less equipment, save 75% on power and cooling, and pay for the project in about 18 months -- without mentioning that our userbase/codebase has grown to the point that we are paying people to stare at screens.
Read this article about the Titan upgrade to Oak Ridge Supercomputer http://www.anandtech.com/show/6421/inside-the-tita
Re:don't rule out (Score:5, Insightful)
On the other hand, depending on what kind of courses you teach (tech school, masters degree comp sci, etc), keeping them around for *students* to have experience building a working cluster and then programming stuff to run parallel on them may be a good idea. Of course, this means the boxes wouldn't be running 24/7/365 (more likely 24/7 for a few weeks per term) so the power bill won't kill you, and it could provide valuable learning experience for students... especially if you have them consider the power consumption and ask them to write a recommendation for a cluster system.
Re: (Score:2)
You could have them predict what changing CPU/Memory/Interconnect will do to performance, then make them try it out. Put some *Science* into Computer Science.
Re:don't rule out (Score:5, Interesting)
Great point. Back in the day I worked on an SGI Origin mini/supercomputer (not sure if it qualifies -- a 32-way symmetric multiprocessor is still kind of impressive nowadays, I guess; even a 16-way Opteron isn't symmetric, I don't think). Anyway, at the time (~2000) there were much faster cores out there. Sure, we could use this machine for free for serial load (yeah, that is a waste), but we had to wait 3-4x as long as on a modern core. You ended up having to ssh in to start new jobs in the middle of the night so you didn't waste an evening of runs, versus getting 2-3 in during the day and firing off a fourth before bed. Add to that the IT guys had to keep a relatively obscure system around and provide space and cooling for this monster, etc.; they would have been better off just buying us ten (~1GHz at the time, I guess) dual-socket workstations.
Re: (Score:3)
Re: (Score:3)
Agreed. Once the OP calculates the TCO of the system, it might turn out that the free stuff isn't worth it. First you should find someone who has done something similar before. Then you can start from the actual bottlenecks and play out some alternative scenarios.
What requirements do your calculations have? CPU vs I/O? The TDP of an E8000, 65W, is not bad - this puts your presumed rack short of the 2kW range. How much would that electricity cost for you in a year? If your calculations are I/O bound,
Easy... (Score:1)
1. buy malware at a shady virus exchange to create a beowulf botnet
2. ???
3. profit!!!
GPUs (Score:2)
Glad I could help
Re: (Score:2)
Re: (Score:2)
Once you solve the hardware challenges..... (Score:5, Informative)
You'll need to consider how you're going to provision and maintain a collection of systems.
Our company currently uses the ROCKS cluster distribution, which is a CentOS-based distribution that provisions, monitors and manages all of the compute nodes. It's very easy to have a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.
One thing that I really like about ROCKS is their provisioning tool which is called the "Avalanche Installer". It uses bittorrent to load the OS and other software on each compute node as it comes online and it's exceedingly fast.
I installed ROCKS on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.
Check it out at www.rockclusters.org
Re: (Score:2)
Re: (Score:2)
It comes with a pretty recent version of SGE and openmpi installed. It's fully capable of using NFS shares and many people have used it with Infiniband. Cluster monitoring's done with ganglia. The kernel's customizable and you can add your own modules as "rolls" and can manage packages either as a post install or build it into the kickstart for each node. We use Isilon for our shared storage, but we're probably going to be setting up a gluster storage cluster too.
Rocks is a great way for an organization to
Re: (Score:1)
Correct website is -> www.rocksclusters.org
Re:Once you solve the hardware challenges..... (Score:4, Informative)
I can recommend Rocks as well, although you WILL need the slave nodes to have disks in them (you could scrounge some ancient 40GB drives from somewhere...). You seem to want hardware information, so...
First point is to have all the fans pointing the same way. Large HPC installations arrange cabinets back-to-back, so you have a 'hot' corridor and a 'cold' corridor, which lets you access both sides of each cabinet and saves some money on cooling.
My old workplace had two clusters and various servers in an air-conditioned room, with all the nodes pointing at the back wall -- probably similar to what you have.
Don't know anything about the UPS, but I would assume having it on the floor would be OK.
Good luck with your project. Write a post in the future telling us how it goes.
Re: (Score:2)
My bad: www.rocksclusters.org
Really? (Score:5, Funny)
Probably not worth your time (Score:5, Interesting)
I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.
If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity.
Re: (Score:2)
Absolutely right about HPC users. Unless you are a gluten for punishment generally you need to get results fast before you know what is next. So users will avoid your cluster nodes because they can get 2-3X the speed from a modern desktop. What you will get is the people that have an endless queue of serial jobs (been there my last computational project was about 250,000 CPU hours of serial work) but generally you'll have a lot of idle time. People will fire off a job and it will finish part way through the
Re: (Score:2, Informative)
I'm a glutton for correcting grammar mistakes, and I believe you meant to use the word "glutton" where you used the word "gluten." Gluten is a wheat based protein, and a glutton is someone that exhibits a desire to overeat.
Re: (Score:2)
So what if you're a glutton for gluten?
Well, besides the Atkins Diet is probably not for you, that is....
Re: (Score:2)
So what if you're a glutton for gluten?
Then you are a......glutton for gluten?
Re: (Score:2)
Unless you are a gluten for punishment.
If you're anything like my wife, gluten is punishment.
Thank you, thank you, I'll be here all week! Enjoy the veal!
Re:Probably not worth your time (Score:4, Interesting)
I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.
I strongly disagree. I actually had a very similar Ask Slashdot a while back.
The cluster got built, and has been running happily since.
If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity
Oh yuck!
But what you save in electricity, you'll lose in postdoc/developer time.
Sometimes you need results. Developing for a GPU is slow and difficult compared to (e.g.) writing prototype code in MATLAB/Octave. You save heaps of vastly more expensive person and development time by being able to run those codes on a cluster. And also, not every task out there is even easy to put on a GPU.
Re:Probably not worth your time (Score:4, Informative)
You *do* know that Matlab has been supporting GPU computing for some time now? We bought an entire cluster of several hundred nVidia GTX 480's for the explicit purpose of GPU computing.
Re: (Score:3)
You *do* know that Matlab has been supporting GPU computing for some time now?
Yes, but only for specific builtins. If you want to do something a bit more custom, it goes back to being very slow.
Re: (Score:2)
You do your due diligence, you pick the best solution for you. Same with parallel computing. Same with everything.
Re: (Score:2)
Same with parallel computing. Same with everything.
Sure. Ideally, you'd know in advance exactly what jobs you want to run and tailor the hardware to the jobs. Less ideally, you'd have a general idea of the class of jobs.
I would advocate a cluster of PCs in general. Firstly because it's not easy to GPUify many jobs, and secondly because if you can GPUify it, then for a few hundred dollars per node you can upgrade the cluster to be a GPU cluster.
The nice thing is that schedulers generally understand that machines have differe
Re: (Score:2)
It depends very specifically on the application. There are some fields that are currently tied to nVidia due to "legacy" code (a strange term for code that can't be more than 1-2 years old) that is written in CUDA. If so, you can buy an equivalent nVidia card.
If you're writing your own app (which if they're studying combinatorics seems likely) then rewriting the core loop in OpenCL is reasonable.
OpenCL is a higher-level abstraction, and you do lose some performance compared to CUDA, but it's worth it in my opinion simp
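For a sense of what "rewriting the core loop in OpenCL" can look like from Python, here is a minimal PyOpenCL vector-add sketch; the kernel, buffer sizes, and names are illustrative rather than anything from the thread.

```python
import numpy as np
import pyopencl as cl

# Host data (sizes are arbitrary for the example).
a = np.random.rand(1 << 16).astype(np.float32)
b = np.random.rand(1 << 16).astype(np.float32)

ctx = cl.create_some_context()      # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The "core loop", rewritten as an OpenCL C kernel.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)   # device -> host
assert np.allclose(out, a + b)
```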
Sounds interesting... (Score:5, Informative)
The standard for airflow is front to back and upwards. Doing some sticky-note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1" thick plywood and dado-cut 1/4" channels top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be easy removal of a dead board.
Going on this idea, you could also make these as "units" and install two of them two deep in the cabinet (if you used L rails).
Without doing any measuring, I'm suspecting this would get you 5 machines for 7U or 10 machines if you did 2 deep in 7U.
Re: (Score:2)
The standard for airflow is front to back and upwards. Doing some sticky-note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1" thick plywood and dado-cut 1/4" channels top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be easy removal of a dead board.
That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?
Re: (Score:2)
That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?
The autoignition temperature for generic cheapo plywood is somewhere on the order of 300 degrees C. If you went with pine, which is still pretty cheap, it goes up to 427 degrees C.
How hot do you think computers run?
The dust I could give you, if the wood used was cheap chipboard, balsa, or something else soft. With something even moderately hard like pine it wouldn't be a problem, as long as you properly cleaned off the sawdust from the cutting process. If you went all out and used oak, it's probably harder th
Re: (Score:2)
That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?
The autoignition temperature for generic cheapo plywood is somewhere on the order of 300 degrees C. If you went with pine, which is still pretty cheap, it goes up to 427 degrees C.
How hot do you think computers run?
It's not normal operation that would concern me with wooden rack shelves, but failures like this:
http://www.theregister.co.uk/2012/11/26/exploding_computer_vs_reg_reader/ [theregister.co.uk]
http://ronaldlan.dyndns.org/index.php [dyndns.org]
http://www.tomshardware.com/reviews/inadequate-deceptive-product-labeling,536.html [tomshardware.com]
One bad power supply could set the whole cabinet on fire -- and perhaps worse, set off the server room fire suppression system.
Inter-node communication (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Reliability, space, and efficiency (Score:3)
Scientific American article: http://www.scientificamerican.com/article.cfm?id=the-do-it-yourself-superc [scientificamerican.com]
So reusing old hardware (Score:4, Insightful)
Your solution will take 14 servers, connect them with an ancient 1GbE interconnect, and hope for the best. The interconnect for clusters REALLY matters; many problems are network bound -- and not only network bound but latency bound as well. Look at the list of fastest supercomputers and you will barely see Ethernet anymore (especially at the high end), and definitely not 1GbE. Your new boxes will probably come with 10GbE, which will definitely help... especially since there will be fewer nodes to talk to (only 2, maybe 4).
The other problem you will run into is that your system will take about 20x the power and 20x the air conditioning bill (yeah - that is a LOT of power there); a modern new system will pay for itself in 9-12 months (and that doesn't include the tax deduction for donating the old systems and making them Someone Else's Problem).
Recycling old hardware always seems like fun. At the end of a piece of hardware's life cycle, look at what it will actually cost to keep it in service - just the electricity bill will bite you hard, and then you have the maintenance and the fun reliability problems.
Re: (Score:2)
sell them and buy new.... (Score:5, Informative)
E8600: 34; i7-3770: 130 -- roughly 4x the performance.
Sell them and use the $1400 towards buying Qty. 4 LGA1155 motherboards (4 x $80), 4 unlocked K-series i7s (4 x $230), 4 x 8GB of DDR3 RAM (4 x $40), and 4 budget ~300-400W power supplies (4 x $30) = $1520.
Use a specialized clustering OS (Linux) and have a smaller, easier-to-manage system with lots more DDR3 memory and lower electricity (and air-conditioning) bills....
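A quick sanity check of the parts math in the comment above, as a sketch (the prices are simply the ones quoted):

```python
# Tally of the proposed 4-node build: (quantity, unit price in USD).
parts = {
    "LGA1155 motherboard":    (4, 80),
    "unlocked K-series i7":   (4, 230),
    "8GB DDR3 RAM":           (4, 40),
    "300-400W power supply":  (4, 30),
}
total = sum(qty * price for qty, price in parts.values())
print(f"total: ${total}")   # -> $1520, roughly covered by the $1400 from selling the old boxes
```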
Not another cluster... (Score:3)
Unless you have a large number of identical machines capable of PXE booting and the necessary network hardware to wire them all together, you are really just building a maintenance nightmare. It might be fun to play with a cluster, but you'd do better to buy a couple of machines with as many cores as you can. It will take less space, less power, less fumbling around with configurations, less time and likely be cheaper than trying to cram all the old stuff into some random rack space.
If you insist on doing this, I suggest the following:
1. Only use *identical* hardware (or at least hardware that can run on exactly the same kernel image, modules and configurations), with the maximum memory and fastest networks you can.
2. Make sure you have well-engineered power supplies and cooling.
3. PXE boot all but one machine and make sure your cluster "self configures" based on the hardware that shows up when you turn it on, because you will always have something broken.
4. Don't use local storage for anything more than swap; everything comes over the network.
5. Use multiple network segments, split between a storage network and an operational network.
By the way... For the sake of any local radio operations, please make sure you don't just unpack all the hardware from its cases and spread it out on the work bench. Older hardware can be a really big RFI generator. Consider keeping it in a rack that offers at least some shielding.
Cold side, hot side (Score:2)
Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis?
Keep them all the same, so that the system works as one big fan, pulling cool air from one side of the cabinet and exhausting hot air from the other. It's easiest to visualize if you imagine the airflow with a simple scenario. Imagine you had all of the even numbered shelves facing backward, blowing hot air to the front of the rack, while all the odd numbered shelves were trying to suck cool air from the front. That would totally fail because the odd numbered shelves would be sucking in hot air blown out
Watts X 3 = BTU (Score:2)
We have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small)
1 watt of continuous draw is about 3.4 BTU per hour (1 BTU is about 0.29 watt-hours). So take your total power usage in watts and multiply by three (and a bit). That's how many BTU per hour of heat the rack will dissipate (all power eventually turns to heat), and that's how much ADDITIONAL cooling you'll need beyond what's already used to keep the room cool.
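A worked version of that rule of thumb, as a rough sketch; the per-node and overhead wattages here are assumptions for illustration, not measurements of the submitter's E8000 boxes.

```python
# Rough cooling math for the rack, using the watts-to-BTU rule above.
WATTS_PER_BTU_HR = 1 / 3.412      # 1 W of continuous draw ~= 3.412 BTU/hr

nodes = 14
watts_per_node = 100              # assumed average draw under load
other_watts = 150                 # switch, head node, UPS losses (guess)

total_watts = nodes * watts_per_node + other_watts
btu_per_hr = total_watts / WATTS_PER_BTU_HR

print(f"{total_watts} W -> {btu_per_hr:.0f} BTU/hr")
# ~1550 W -> ~5290 BTU/hr, well within a 24000 BTU/hr unit *if* the room's
# other loads (people, lights, solar gain) leave that much headroom.
```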
Sounds like a waste of time (Score:2)
Raspberry Pi. (Score:1)
Raspberry Pi.
http://www.tomshardware.com/news/Raspberry-Pi-Supercomputer-Legos-Linux,17596.html
14 cpu's from 5 years ago (Score:2)
Why not give them away and buy 2 i7 26xx or better CPU's for the same performance? You could fit that in 1U instead of a 42U rack. No switch required, smaller UPS required, less aircon load, less electricity.
Microwulf (Score:2)
I've built one, it works, but there are caveats (Score:3, Interesting)
We have a cluster at my lab that's pretty similar to what the submitter describes. Over the years, we've upgraded it (by replacing old scavenged hardware with slightly less old scavenged hardware) and it is now a very useful, reasonably reliable, but rather power-hungry tool.
Thoughts:
- 1GbE is just fine for our kind of inherently parallel problems (Monte Carlo simulations of radiation interactions). It will NOT cut it for things like CFD that require fast node-to-node communication.
- We are running a Windows environment, using Altair PBS to distribute jobs. If you have Unix/Linux skills, use that instead. (In our case, training grad students on a new OS would just be an unnecessary hurdle, so we stick with what they already know.)
- Think through the airflow. Really. For a while, ours was in a hot room with only an exhaust fan. We added a portable chiller to stop things from crashing due to overheating; a summer student had to empty its drip bucket twice a day. Moving it to a properly ventilated rack with plenty of power circuits made a HUGE improvement in reliability.
- If you pay for the electricity yourself, just pony up the cash for modern hardware, it'll pay for itself in power savings. If power doesn't show up on your own department's budget (but capital expenses do), then by all means keep the old stuff running. We've taken both approaches and while we love our Opteron 6xxx (24 cores in a single box!) we're not about to throw out the old Poweredges, or turn down less-old ones that show up on our doorstep.
- You can't use GPUs for everything. We'd love to, but a lot of our most critical code has only been validated on CPUs and is proving very difficult to port to GPU architectures.
(Posting AC because I'm here so rarely that I've never bothered to register.)
Go ask the guys (Score:3)
Racks... (Score:1)
Actual experience (Score:3, Interesting)
I've done this. Starting with a couple of racksful of PS/2 55sx machines in the late '90s and continuing on through various iterations, some with and some without budgets. I currently run an 8-member heterogeneous cluster at home (plus file server, atomic clock, and a few other things), in the only closet in the house that has its own AC unit. It's possible I know something about what you're doing.
Some of what I'll mention may involve more (wood) shop or electrical engineering than you want to undertake.
My read of your text is that there is a computer lab that will be occupied by people that will also contain this rack with dismounted Optiplex boards and P/Ss. This lab has an A/C unit that you believe can dissipate the heat generated by new lab computers, occupants, these old machines in the rack, and the UPSs. I'll take your word, but be sure to include all the sources of heat in your calculation, including solar thermal loading if, like me, you live in "the hot part of the country". Unfortunately, this eliminates the cheapest/easiest way of moving heat away from your boards -- 20" box fans (e.g. http://www.walmart.com/ip/Galaxy-20-Box-Fan-B20100/19861411 ) mounted to an assembly of four "inward pointing" boards. These can move somewhat more air than 80 mm case fans, especially as a function of noise. One of the smartest thermal solutions I've ever seen tilted the boards so that the "upward slope" was along the airflow direction -- the little bit of thermal buoyancy helped air arriving at the underside of components to flow uphill and out with the rest of the heated air. I.e., this avoided a common problem of unmodeled airflow systems of having horizontal surfaces that trapped heated air and allowed it to just get hotter and hotter.
Nevertheless, the best idea is to move the air from "this side" to "that side" on every shelf. Don't alternate directions on successive shelves. If you're actually worried about EMI, then you must have an open sided rack (or you shouldn't be worried). One option is to put metal walls around it, which will control your airflow. Another option that costs $10 is to make your own Faraday cage panels however you see fit. (I've done chicken wire and I've done cardboard/Al foil cardboard sandwiches. Both worked.)
You should probably consider dual-mounting boards to the upper *and* lower sides of your shelves. Another layout I've been very happy with is vertical board mounts (like blades) with a column of P/Ss on the left or right.
A *really* good idea for power distribution is to throw out the multiple discrete P/Ss and replace them with a DC distribution system. There's very little reason to have all those switching power supplies running to provide the same voltages over 6 feet. The UPSs are the heaviest thing in your setup; putting them at the bottom of the rack is probably a good idea. They generate some heat on standby (not much) and a lot more when running. Of course, when they're running, the AC is (worst case) also off and at least one machine should have gotten the "out of power" message and be arranging for all the machines to "shutdown -h now".
You only plan on having two cables per machine (since your setup seems KVM-less and headless), so wire organization may not be that important. (Yes, there are wiring nazis. I'm not one.) Pick Ethernet cables that are the right length (or get a crimper, a spool, and a bag of plugs and make them to the exact length). You'll probably get everything you need from 2-sided Velcro strips to make retaining bands on the left and right columns of the rack. Label both ends of all cables. Really. Not kidding. While you're at it, label the front and back of every motherboard with its MAC(s) and whatever identifiers you're using for machines.
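Picking up the MAC-labelling advice above: since the nodes are diskless and headless, those labels map naturally onto static DHCP/PXE reservations. A small, hypothetical sketch (MACs, hostnames, and subnet are made up) that generates dnsmasq-style dhcp-host lines, assuming dnsmasq is what serves the netboot environment:

```python
# Sketch: generate dnsmasq static-lease lines from the MAC labels on the
# boards. MACs, hostnames, and the subnet are made-up examples.
nodes = {
    "00:1a:a0:00:00:01": "node01",
    "00:1a:a0:00:00:02": "node02",
    "00:1a:a0:00:00:03": "node03",
}

base_ip = "10.0.0."
for i, (mac, name) in enumerate(sorted(nodes.items(), key=lambda kv: kv[1]), start=11):
    # dhcp-host=<mac>,<hostname>,<ip> pins each board to a fixed address.
    print(f"dhcp-host={mac},{name},{base_ip}{i}")
```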
What's the Goal? (Score:2)
Why? (Score:2)
As someone who's built and manages clusters... (Score:3)
Pulling the system out of the case seems... odd. Are you that short on space that you can't have another rack?
Several reasons:
1. dust
2. static
3. a. Cooling: real servers have plastic shrouds to guide the air from the fans through the heat sinks. Without that, the cooling won't be anywhere near as good, and possibly not good enough to keep them from shutting down when they're being run hard.
b. DO NOT ALTERNATE directions. In data centers, in server rooms, etc., you have them all in a row facing the same way, blow your cool air towards the front, and let it get somewhat warm behind. This is how they're designed to be used.
UPSes on the bottom: sure. I've put some in the middle of the rack, but those are rack-mount. MAKE SURE that you leave clearance to open 'em up when you need to replace the batteries.
NOTE: when you buy replacement batteries for these UPSes, UNDER NO CIRCUMSTANCES BELIEVE ANY MANUFACTURER OR RESELLER. TELL THEM THAT IF THEY DON'T SEND YOU HR - HIGH RATE - BATTERIES, YOU WILL SEND THEM BACK. APC rackmounts WILL NOT ACCEPT *A*N*Y*T*H*I*N*G* but an HR battery, and will continue to tell you that you need to replace it, forever.
I'm assuming you'll be running linux. I'm also assuming that you're using this for heavy duty computing, not load balancing or H/A (high availability).
For clustering, also check out torque, which is a standard clustering package, though it does need the jobs to be parallel processing aware.
For the person who mentioned "time" as a cost: I'd assume that the OP was asked to do this "as time permitted", and it's certainly something useful to do, as opposed to playing solitaire while waiting for something to need work....
mark
Plan9 (Score:1)
Whatever happened to Plan9?
If you are serious about this project I would use Plan 9 because it is designed to use all of your hardware transparently. They can always use more members in this small community. You might find this underrated platform quite delightful:
http://plan9.bell-labs.com/plan9/ [bell-labs.com]
Ignore all the naysayers. Just having experience with Plan 9 makes this experiment worth it.
Re:Just use Amazon AWS (Score:5, Informative)
It's 2013 don't build your own cluster just use AWS EC2 spot instances.
An EC2 "High CPU Medium" instance is probably close to his Core 2 Duo's (it has 1.7GB RAM + two cores of 2.5 EC2 compute units each (each ECU is equivalent to a 2007 era 1.2Ghz Xeon).
Current spot pricing is $0.018/hour, so a month would cost him around $12.96. (not including storage, add about a dollar for 10GB of EBS disk space).
If his computers use 150W of power each, at $0.12/kWh, they'll cost exactly $0.018/hour in electricity -- the same price as an EC2 instance excluding storage.
However spot pricing is not guaranteed, so he'll have to be prepared to shut down his instances when the spot price rises above what he's willing to pay -- full price for the instance is $0.145/hour, but he could get that down to $0.09/hour if he's willing to pay $161 to reserve the instance for 3 years.
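The break-even arithmetic above, spelled out as a small sketch using exactly the figures quoted (150 W per box, $0.12/kWh, $0.018/hr spot, $0.145/hr on-demand):

```python
# Break-even sketch using the figures quoted in the parent comment.
watts_per_box = 150
electricity_per_kwh = 0.12
spot_price_per_hr = 0.018
full_price_per_hr = 0.145

local_power_per_hr = (watts_per_box / 1000.0) * electricity_per_kwh
print(f"local electricity: ${local_power_per_hr:.3f}/hr per box")   # $0.018/hr
print(f"EC2 spot:          ${spot_price_per_hr:.3f}/hr per instance")
print(f"EC2 on-demand:     ${full_price_per_hr:.3f}/hr per instance")
# At spot prices the cloud instance costs about the same as just powering the
# old box -- before counting the old box's cooling, space, and admin time.
```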
Re: (Score:3)
The" Cluster Compute" instances might be better suited to cluster computing, although they're not cheap. But a single one of them, a dual-CPU eight core Xeon E5-2670 (dedicated, so they don't list EC2 compute units), probably has more computing power than the entire Core 2 Duo cluster being proposed.
But as I said, not cheap. It comes out to $400 per month for a reserved instance. A spot instance could be slightly cheaper. Then again, at the 150W of power usage you specified, times 1.8 to use the industry ty
Re: (Score:2)
Also, starting 1000 nodes at once for a task is nothing short of awesome.
Depends (Score:2)
Re: (Score:2)
it would be cheaper and faster to replace those 14 computers you already own with 4 brand new computers whose processors alone cost more than $500 each
FTFY.
Strange idea of "cheaper" you've got there.
Re: (Score:2)
3. "You should spend additional money to pay for more efficient machines rather than the computer you already have which are paid for, because money grows on trees, I place no value on your learning exercise, and I assume the electricity comes right out of your departmental budget exactly the same way purchase hardware would."
I always love that -- if I work for someone, the goal is to get the largest value out of the money spent, regardless of whose budget it is. This is how we end up with a bureaucracy that does very stupid things, like deploying old hardware that will cost more in power over 6 months than an updated environment will cost including new systems, its electricity and its cooling. The money for power does not grow on trees; it is a real cost to the whole organization.