Startup's Submerged Servers Could Cut Cooling Costs 147
1sockchuck writes "Are data center operators ready to abandon hot and cold aisles and submerge their servers? An Austin startup says its liquid cooling enclosure can cool high-density server installations for a fraction of the cost of air cooling in traditional data centers. Submersion cooling isn't new, dating back to the use of Fluorinert in the Cray 2. The new startup, Green Revolution Cooling, says its first installation will be at the Texas Advanced Computing Center (also home to the Ranger supercomputer). The company launched at SC09 along with a competing liquid cooling play, the Iceotope cooling bags."
Or (Score:5, Insightful)
Until you have to try and RMA that CPU :)
Re:Or (Score:4, Insightful)
Don't forget the problems you run into when the server decides to spring a leak. Old servers and old cars would have the same level of sludge and oil puddles below them.
And the weight of the servers will be higher too.
Re: (Score:1)
Sprinkler test in 3..2..1...
Re: (Score:2)
A leak? Have you worked with transformer oil??
That is about the nastiest stuff you can think of.
Imagine you have a mouse on your desk that is connected to such a computer.
Then the oil will slowly travel into the connector, through the inside of the cable, up to your mouse, and spread as a very thin oily film all over your goddamn desk! Now add dust to it, and you've got a really nasty mess. Good luck cleaning that up! At least your mouse, keyboard, display, etc., can go straight to the trash.
And that’s the
Re: (Score:1, Funny)
What a verbose way of saying "And the servers will be heavier too."
Interestingly enough, I find that you have chosen such a long-winded method of communication for expressing your thoughts regarding the excessive wordiness of the father's father post (the grandfather) that I cannot help but notice how impaired your message is by your excessive use of words.
TL;DR would have worked.
Re: (Score:3, Funny)
Consider that 99% of the information stored in computers is either bullshit, the analysis of bullshit, or the selling of bullshit to the bullshitted entirely denominated in fiat bullshit monetary units. Consider that the lower floors under that I.T. floor would most likely be staffed by management, sales and marketing wanks. I'm seeing very little downside to your so-called "disaster".
Re: (Score:2)
Porn, cute kittens, weird youtube videos and Slashdot are also contributing.
how much does it cost? (Score:5, Interesting)
The new Xeon 5600s use less power than previous CPUs, and SSDs also run a lot cooler. How much does this liquid cooling enclosure cost, and what is the performance compared to just upgrading your hardware?
HP is going to ship their Xeon 5600 servers starting on the 29th
Re:how much does it cost? (Score:4, Funny)
Thanks for conveniently letting us know that HP's new server, based on the Xeon 5600, is shipping soon. I'll be sure to look out for that HEWLETT PACKARD server coming soon, with a Xeon 5600. On the 29th. I'll be looking for it.
Re: (Score:2)
I prefer to buy my servers from Dell, and if you can't take it, then I'll see you on July 12, 8 o'clock 9 pm central on pay per view! at the Royal Rumble in Las Vegas!
As someone who HAS built & run oil immersed.. (Score:5, Informative)
..computers, allow me to label this a "fad"
The idea is funky, but to get good cooling you want convection (every joule of pump energy from a circulating pump gets transferred into the oil as yet more heat), which means deep tanks, which means, for the server environment, goodbye high density.
The ONLY thing that has changed since I was doing this is the affordability of SSDs, which means it is now practical to immerse the whole computer, mass storage and all. That makes things a lot simpler and cheaper, and means you really can be JUST oil cooled, not mostly oil cooled except for air cooled HDs etc.
TOP TIP from an old hand.
If you are going to oil cool by immersion, buy the latest top quality hardware, because once immersed it stays there; you'll only pull it once, to see why it sucks.
BIGGEST mistake experimenters make is using old hardware, cos you always end up playing with it, making a mess, ahh fsckit..
Nota bene: if you are building one of these in anger, make allowances for the significant increase in weight that the oil adds.
HTH etc
Forgot to say why I oil cooled. (Score:2)
It was in order to build a totally silent computer. The cooling aspect worked OK, nothing spectacular, not if you lay out the case properly, buy fans with decent blade profiles and proper bearings, and decent aftermarket heatsinks, but the total silence was beautiful... even ATX PSUs do make a noise, you only notice when you immerse *everything*
Re: (Score:3, Interesting)
I'm also curious: is there any kind of fire hazard doing this on a large scale?
There isn't a lot to burn in a normal computer (at least not burn really well), but could a short circuit near a leak lead to an inferno in an oil cooled data centre?
Or is the oil treated in some way to make it less likely to burn?
Re:Forgot to say why I oil cooled. (Score:4, Informative)
Educate thyself: http://en.wikipedia.org/wiki/Mineral_oil#Mechanical.2C_electrical_and_industrial
Just because something CAN burn doesn't make it dangerous to have around potential sources of electrical arcing. Hydrocarbon petroleum products present no real fire/explosion danger unless the substance is warmer than its flash point, which is the temperature above which the liquid substance can evaporate into the air. Below the flash point temperature, oil is only as flammable as plastic. The evaporated fumes mixed into the air are the ignition danger, not the liquid itself.
This is because ongoing hydrocarbon combustion requires steady supplies of freely-mixing HC and oxygen. Sustaining the reaction requires the input of a tremendous volume of oxygen (compared to the liquid fuel volume, anyway), and the oxygen has to get rapidly mixed with the HC. That mixing can't happen quickly enough with the liquid HC. That's why the flash point is such an important consideration--the gaseous HC fumes mix quite well and quickly with atmospheric oxygen, creating nice conditions for a sustained combustion (a fire).
This is even true of gasoline (flash point = -40F). If you pour gasoline into a pail in the middle of a bad Antarctic winter, and you throw a match into the pail, the gasoline will just extinguish the match like a bucket of water.
Of course, if you mix liquid HC with liquid oxygen, or any other eager oxidizers, all bets are off. That shit will explode at cryogenic temperatures if you just look at it funny. (That's how rocket engines work.)
Re: (Score:2)
I think the pyrolysis is kind of secondary, here. Kerosene becomes volatile, on its own, somewhere above 100F, depending on the specific HC composition of the sample. (The lower the average per-molecule carbon chain length, the lower the flash point, as with any non-solid HC.)
But you're probably at least partially correct, because there ought to be some amount of thermal cracking happening at that temperature. The cracking would tend to produce shorter (more volatile) carbon chains.
I just don't think that t
Re: (Score:1)
Not all oil burns well at atmospheric pressure, or at all for that matter.
Re: (Score:2)
Didn't matter at all in his case, since it was in a basement under the British Houses of Parliament.
On the serious side I used to do electrochemical machining in a deep kerosene bath - that involves passing a high voltage arc and a lot of current through the kerosene. It's harder to get this stuff burning than you would think so long as you take care.
Re: (Score:2)
Why not use a water heat exchanger outside the case to cool the oil (while keeping water away from system components, and getting full contact with the entire system)? The water could then go into a loop to cool it. Other coolants could also be used, although water is great from a heat capacity standpoint.
Since the water doesn't touch anything important, it can be dumped into a cooling tower/etc.
To cool one system I doubt it is worth all the trouble, but for a datacenter I bet you could make it very effic
Re: (Score:2)
How do you build a server 'in anger'?
Re: (Score:2)
...you want convection (every joule of pump energy from a circulating pump gets transferred into the oil at yet more heat) which means deep tanks which means, to the server environment, goodbye high density.
Really? You could say the same about air moved by a fan (that the fan's energy contributes to the overall heat). I'm no expert in this area, but I've seen liquid cooled PCs and the only big component is the radiator. I would think you could pack liquid cooled components more densely than air cooled, and you could put the radiator in another room.
Re: (Score:1)
Just curious, and you seem like the guy to ask, has anyone done full center immersion? With the proliferation of shipping container rack systems, would it be possible to seal the entire container into one giant unit with a manhole on the top, then drop in a diver with either tanks or a line and let them do maintenance without worries of spillage? You'd be able to keep the same density as is currently used, since you'd be able to use the normal maintenance space as space for convection currents and the nor
Re: (Score:2)
Re: (Score:2)
Yes. Works for SSDs, kills HDDs.
Re:As someone who HAS built & run oil immersed (Score:5, Insightful)
Oil immersion may or may not be more efficient, but it doesn't seem like it would scale well. In a large data center where some hardware component is failing on a daily basis, because you have tens of thousands of servers, keeping all that oil contained within the enclosures would be a major challenge. During maintenance, that stuff is going to be getting all over everything, including the tech, who can easily spread it all over anything he touches before he gets around to cleaning up. You'd need a cleaning crew out on the floor constantly.
Re: (Score:2)
"modern data center [...] Oil immersion may or may not be more efficient, but it doesn't seem like it would scale well. In a large data center where some hardware component is failing on a daily basis, because you have tens of thousands of servers"
In a modern, large datacenter you don't repair each failing component; you just route your computing load around it.
Re:As someone who HAS built & run oil immersed (Score:4, Funny)
Cue tech in scuba gear swimming down through the oil to change a power supply.
Re: (Score:1)
Cue tech in scuba gear swimming down through the oil to change a power supply.
Would this be a good application for a robot?
Re: (Score:2)
Re: (Score:2)
Re: (Score:1, Insightful)
You are making the assumption that individual servers or even racks must be changed out regularly. Considering how many datacenters are no longer space constrained, but rather power constrained by the number of kW per square foot, or cooling constrained due to local regulations or power consumption, other approaches are valid.
The opposite conclusion is a containerized datacenter/rack cluster using oil immersion as the internal primary coolant, hooking into a datacenter fed cold water heat exchanger mounted at th
Maintenance Access? (Score:5, Interesting)
How much harder does it make doing standard move cables/switch harddrives/change components maintenance?
One of the advantages of a standard rack to me is that all of that is fairly easy and simple, so you can fix things quickly when something goes wrong.
Re: (Score:2)
Re: (Score:2)
Agreed, although if this became standard and built into racks then maybe each server would just have a button next to it that pumped out the coolant quickly. Hot-swaps probably wouldn't work inside the case itself, since you'd have to remove the coolant to perform this task.
Alternatively, you could perform a hot swap immersed in oil if you did it quickly - the oil probably couldn't be circulated with the case open but it would at least be there. I'm not sure that this would actually buy you much though, a
Re:Maintenance Access? (Score:5, Funny)
scuba gear and lessons for all sys admins!! All datacenters could just be a giant pool of swirling oil.
Re: (Score:1)
If the external cooling for the oil failed, you might end up with some mighty crispy techs...
Just in case, have them roll in breading before going in; then you could at least salvage the meat :-D
Mmm... Country Fried Tech...
Re: (Score:3, Funny)
Do we get old timey shirts with our name on them too?
"I see, ahh, your problem here maam. Your server rack is down a few pints. I'll top it off and put it on the lift and check the pump too."
Submerged data center (Score:2)
Re: (Score:2)
Anyway, this is much less interesting. Oh well.
Re: (Score:2)
Nope, you're not the only one. I had a vision of sysadmins in SCUBA gear doing hardware swaps.
Oh yuck. (Score:2)
You'll obviously need to be scaling before you invest in a system that involves a big vat full of oil.
Also, what does the fire marshal think of a big vat full of oil? Hazardous disposal? Oh boy... some company goes BK, and they leave behind a big vat full of oil and outdated electronics.
I didn't dig deep enough to see if they are actively pumping the oil or not. If they are, they're not doing it right. Any system that really cuts cooling costs should be using an LTD engine to transform the heat into useful
Re: (Score:3, Insightful)
You don't need oil-air heat exchangers, oil vats, or anything of the kind. What you need is chilled WATER, which is already generated by cooling plants. Run this water to each server using simple pipes and a large pump for the whole facility, and then put an oil/water heat exchanger inside each chassis, along with a pump to circulate the oil.
Is the efficiency going to be better? Maybe, maybe not, who cares. What's different is that cooling is much easier with 3/8" pipes of water rather than worrying abo
Re: (Score:2)
My initial thoughts were "Why on earth would you use the engine from an LTD [wikipedia.org]?"
My ambiguous Wikipedia search revealed that you were in fact referring to a Stirling engine (aka. a low temperature difference engine).
Re: (Score:1)
So. That leads us to the questions: Is your overall system efficiency going to be better in some way by running hotter?
As someone who has taken a class in electronics, I can assure you that the efficiency of electronic equipment drops as temperature rises, because leakage currents increase. This may even lead to a thermal runaway situation.
Running hot is also pretty bad as far as reliability goes.
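As a toy illustration of the leakage point above: the baseline wattage, reference temperature, and the common "doubles every ~10 C" rule of thumb used here are all assumptions for illustration, not measurements of any particular chip.

# Toy leakage model: assume leakage power roughly doubles per 10 C rise.
# Baseline value and doubling interval are illustrative assumptions only.
BASE_LEAKAGE_W = 20.0        # assumed leakage of one CPU at the reference temp
REFERENCE_C = 50.0
DOUBLING_INTERVAL_C = 10.0

def leakage_watts(temp_c: float) -> float:
    """Estimated leakage power at temp_c under the doubling-per-10C assumption."""
    return BASE_LEAKAGE_W * 2 ** ((temp_c - REFERENCE_C) / DOUBLING_INTERVAL_C)

for t in (50, 60, 70, 80, 90):
    print(f"{t} C: ~{leakage_watts(t):.0f} W of leakage")

Since that wasted power is itself dissipated as heat, a cooling shortfall can compound, which is the runaway scenario the parent describes.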
Re: (Score:2)
Re: (Score:2, Interesting)
How much longer before we re-invent the mainframe? (Score:1)
Re: (Score:1)
2.5, but it will be a mainframe powered by GPUs
Re: (Score:2)
Sort of like installing little Linux LPARs and such. Very amusing.
Mainframes are still the very best power/performance out there... and probably always will be :)
Re: (Score:2)
If you think about it, a "server farm" really isn't that different from a "mainframe"; it's a whole bunch of CPUs working in parallel, all packed into one room. The only real difference is that most server farms are implemented with separate OSes on each system, instead of a single OS for the whole thing, which is good for redundancy and partitioning but not so great for efficiency. It'd be a lot simpler and more efficient if we just had one big OS for the whole system, with different users using different
Re: (Score:2)
Well, if you're starting a pool, throw in a cloud of servers and you'll be the pioneer.
Come to think of it, I'll refrain from betting on this one; when you're so poised to control the outcome, odds are I'll lose.
Re: (Score:2)
I'm starting a pool. How much longer before the mainframe is re-invented to power cloud computing. I'm taking 1.5 years. Any other bets?
Already happened. Seriously. How do you define "mainframe"? Let's look at the "characteristics" section of wikipedia's article on them:
* ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines
It's quite common for any server type now to do this.
* add or hot swap system capacity non disruptively
Submerged hard disks? (Score:3, Interesting)
Hard disks aren't sealed, there's always (at least, on the dozens of disks I've taken apart) a little felt-pad or sticker covered vent on them. I figured it was for equalisation or something crazy, but I'm not positive.
Given that hard disks aren't sealed, wouldn't they fill with fluid? And assuming they'd still function with a liquid screwing up the head mechanism (given that modern disks' heads float above the platter surface on a cushion of air), wouldn't the increased viscosity slow down seek events?
Re: (Score:2)
Solid state disks.
Essentially, if it has moving parts, it probably stays in air, and uses either conventional air cooling or contact non-submergence liquid cooling.
Re: (Score:3, Informative)
In the embedded video, they indicate that hard disks need to be wrapped in some material the vendor apparently provides, presumably for just this reason. Not sure how well the wrapping transfers heat.
Re: (Score:3, Interesting)
No, the fluid would completely ruin the hard drive because they're not designed for that.
There are two ways around this problem that I see:
1) Use SSD disks instead of mechanical platter HDs.
2) Use regular HDs, but do not submerge them in the cooling oil. Instead, put them in some type of aluminum enclosure which conducts the heat to the cooling oil, but keeps it from contacting the HD itself, sort of like what the water-cooling enthusiasts do for their hard drives today.
And yes, I believe you're correct abou
Re: (Score:2)
Same Thermal Output (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
You do not pump the oil. You set up a convection current via your layout. If you are pumping the oil, you are doing it wrong.
Re: (Score:2)
You can easily pump oil with a positive displacement pump. It's quieter, too - our 10-ton hydraulic press makes less noise than my home PC (with only one SilenX fan in it).
Density isn't really an issue, because for all the extra mass you're pumping, you're taking away something on the order of 5 times the heat per unit mass of coolant (air/oil) that you pump. Aerofoils aren't my area so I'll have to ask someone else to comment on the efficiency of a bladed fan vs. a PD pump, i.e. the energy needed to move 1kg of
Re: (Score:1)
Re: (Score:3, Interesting)
Air actually has a very high thermal resistance so one needs to use forced circulation to actually transport moderate amounts of heat. Running all those fans uses more energy. In fact in any closed room, running a fan may cause objects immediately in front of the fan to be cooled, but overall the room is heating up from the power use.
Oil has a very low thermal resistance naturally so one can use ordinary convection instead (up to some point).
A less messy solution would be for servers to be made with integ
Re: (Score:2, Interesting)
The company's website [grcooling.com] claims that it's easier to cool oil than to cool air. Their argument is that conventional air cooling requires 45 degree F air to keep components at 105 degree F, whereas the higher heat capacity of the oil lets it come out of the racks at 105F. The oil is hotter than ambient air (at least where I live), so it should be easier to remove its heat (through a heat exchanger) than to chill warm exhaust air back to 45F (through a refrigeration unit). Of course most components can run hot
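To put rough numbers on that heat-capacity argument, here is a back-of-envelope sketch in Python. The heat load, temperature rise, and fluid properties are generic textbook-style assumptions, not figures from GRC or TACC; only the ratio between the two coolants matters.

# Back-of-envelope comparison of air vs. mineral oil as a coolant.
# All numbers below are rough assumptions for illustration only.
HEAT_LOAD_W = 10_000   # assumed heat load for one rack, in watts
DELTA_T_K = 15         # assumed coolant temperature rise through the rack

coolants = {
    # name: (density in kg/m^3, specific heat in J/(kg*K)) -- approximate values
    "air (25 C)":  (1.2, 1005),
    "mineral oil": (850, 1900),
}

for name, (rho, cp) in coolants.items():
    # Energy balance Q = rho * cp * flow * dT  =>  flow = Q / (rho * cp * dT)
    flow_l_per_s = 1000 * HEAT_LOAD_W / (rho * cp * DELTA_T_K)
    print(f"{name}: ~{rho * cp / 1000:.0f} kJ/(m^3*K), "
          f"needs ~{flow_l_per_s:.1f} L/s to carry {HEAT_LOAD_W / 1000:.0f} kW")

Per unit volume, the oil carries roughly a thousand times more heat than air for the same temperature rise (here about 0.4 L/s of oil versus about 550 L/s of air), which is why oil leaving the rack at 105F can still be rejected through a simple heat exchanger rather than chilled by a refrigeration plant.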
Re: (Score:3, Interesting)
It's not cheaper to cool oil. However, it's easier, because you can use oil-to-water heat exchangers, and cool the whole server farm with a chilled water plant (like A/C, but only chills water and never uses it to cool air). The benefit of this is that you don't have to worry about airflow, ductwork, and the like, and you can pack servers much more densely into a space than with air cooling. Since floor space in a facility like this is expensive, this saves money. It might also be more efficient to use
Do this for free to be Green. (Score:2)
It might also be more efficient to use chilled water in pipes to cool the servers directly rather than chilling air and blowing that around a big building.
Especially when it's free. I used to work at a medical center with a big data center. Cold city water was run first to the data center, heat-pumped to a cold-air Liebert, and then the slightly-warmer water was piped on to all the places where cold water is used. A degree or two warmer is quite fine at the tap.
Smart downtown-City data centers would work
Re: (Score:2)
Won't work here in Phoenix. Here, in the summertime, there is no "cold" water faucet in your home; there's only "warm" and "hot". Many times, the "warm" faucet is just as hot as the "hot" one.
Of course, I don't know what kind of idiot would locate a datacenter in Phoenix anyway. Except maybe Paypal.
Re: (Score:2)
Heh, that's funny. Fortunately fiber optics run to cold places pretty well.
Re: (Score:2)
Yep. I seriously don't know why all datacenters aren't located in northern climes. I guess many are probably in Calif. because of all the available talent. But there's no talent in Phoenix (what educated people were here have moved out or are in the process. The only ones left are all the zombies working at the local defense contractors like General Dynamics and Honeywell). Why Paypal is located here, I have no idea.
I can't wait to move out of this town, in case it isn't obvious.
Re: (Score:2)
But there's no talent in Phoenix ... Why Paypal is located here, I have no idea.
q.e.d.
Re: (Score:2)
Better yet, just get a solar hot water system and install that on your roof. There are systems that use PEX piping to run water to the panels on the roof, and through a water/water heat exchanger to heat the water in the bottom of the tank. The water drains out of the system into a reservoir tank at night, so that it doesn't freeze in the pipes (which is possible if you simply run main-supply pipes through your attic or a roof panel). The only regular cost is for the electricity to run the pump, which isn
Re: (Score:2)
You're missing something in your sarcasm: in a house, you ultimately have to get the heat into the air. With a server farm, you NEVER need to worry about exchanging heat with the air; you're using cooled fluid to directly cool solid components. Using air as an intermediary is not necessary.
Finally, cast iron radiators actually are an efficient and effective way of heating a home, and were used for a long time until air conditioning was invented. The only reason they got away from it is cost, because most
The test (Score:2)
Oh..... there's something Google didn't think of and try.
"Green Revolution"!!! (Score:1)
Fanless low power servers are the future (Score:4, Interesting)
Re: (Score:2)
So the future is going to be slow, really really slow?
We keep quad socket quad Xeon boards at very high usage all the time. These things are not going to cut it.
Re: (Score:2)
You don't need bigger and bigger individual machines, if you have fast enough IO and your software engineers know WTF they are doing. There are alternative parallel algorithms for practically any problem you'd naively solve in a highly serial way. Given the right programming skill set, we could run just about any web app you care to imagine on a farm of SheevaPlugs (http://en.wikipedia.org/wiki/SheevaPlug). Kind of cute, don't you think?
Why do you think places like Google and the big quant-heavy finance fi
Re: (Score:2)
Don't be an idiot.
Just do like Rsync does: Divide the input data into blocks and hash each block separately.
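A minimal sketch of that block-wise idea in Python (the block size and the choice of SHA-256 are arbitrary here, and this shows only the "hash each block separately" notion, not rsync's actual rolling-checksum algorithm): split the input into fixed-size blocks, hash each block independently so the work parallelizes across cores, then hash the concatenated digests to get a single fingerprint.

import hashlib
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 1 << 20  # 1 MiB blocks -- arbitrary size for illustration

def hash_block(block: bytes) -> bytes:
    # Each block is independent, so these calls can run on separate cores.
    return hashlib.sha256(block).digest()

def blockwise_digest(data: bytes) -> str:
    # Split, hash blocks in parallel, then hash the list of block digests.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ProcessPoolExecutor() as pool:
        block_digests = list(pool.map(hash_block, blocks))
    return hashlib.sha256(b"".join(block_digests)).hexdigest()

if __name__ == "__main__":
    print(blockwise_digest(b"some large payload " * 1_000_000))

Note that the result differs from a plain one-pass hash of the whole stream, so both ends have to agree on the scheme; the payoff is that the expensive per-block hashing is embarrassingly parallel.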
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Mainframe (Score:5, Interesting)
Re: (Score:1, Informative)
Ah yes, the good old days...
As I remember it, there were a couple of levels of coolant used to cool off a mainframe: some mystery liquid was pumped around through tubes that would flow by the chips needing cooling. It had all the necessary qualities, including being non-conductive in case of a leak. Then that liquid was pumped through a heat exchanger where the heat would get transferred to distilled water, which was then pumped to some cooling unit (up on the roof, in our case).
I still rememb
Re: (Score:1)
Re: (Score:2)
More details on Cray 2 cooling (from one who was t (Score:2, Interesting)
Re: (Score:2)
this was ECL logic
And there I was thinking they went straight from TTL logic to CMOS logic logic.
mineral oil (Score:2)
Fluorinert [wikipedia.org] is not mineral oil [wikipedia.org], nor even very similar to mineral oil.
GRC and Iceotope discuss each others' products (Score:2)
Basically, it looks like a simple solution (a bath) versus a more complex one (individual sealed blades). The discussion is here at eWEEK Europe UK [eweekeurope.co.uk].
Peter Judge, UK Editor, eWEEK Europe
Density? (Score:2)
The article describes a system where servers are stored in what is essentially a rack laid down on the ground and filled with oil. Now, this is going to be too heavy, I would have thought, to support off the ground, so you're limited to only using the bottom 60cm or so of each room in your datacenter for server storage. Isn't this going to mean you only get half as many servers in there?
Video and Interview (Score:2)
I interviewed [youtube.com] these guys at SC09 for Linux Magazine. There are some close up shots of the servers in the oil.
Distilled water (Score:2)
Is it at all feasible to run a computer submerged in distilled water? You'd have to ensure that the water remains pure, obviously, but this might be easier than dealing with computers submerged in oil. The obvious advantage is that distilled water is more benign and MUCH easier to work with. Any spills can be cleaned up with a rag, for one thing.
Re:Ease of Service (Score:4, Insightful)
With 30,000 servers in a big room, you do NOT want anyone "fiddling" with them at all. They need to be removed from the room and taken someplace else to be "fiddled" on.
Here's an idea. It would require a chassis redesign, but it would mostly remove the maintenance problems.
Make a special case for each system, which has no fans (since they're only useful for air-cooled systems), and has some type of pump for circulating the cooling oil. In this circulation loop is a heat exchanger, one built into each chassis. The backside of the chassis has two quick-connect connectors for connection to a cooling water supply. These are the type of connectors that close when they're unplugged. Such connectors are both on the water supply, and on the chassis. This way, when a server malfunctions, all the tech has to do is unplug it and pull it out of the rack. The water connectors will disengage, so only a few drops of water will spill (which will evaporate quickly). All the cooling oil will be contained within the server chassis.
The server can then be taken to a designated maintenance area where the oil can be drained and the server operated on, and then refilled with oil and plugged back into the server rack.
Re: (Score:2)
You really don't want to take an entire rack offline just to fix one server.
Re: (Score:2)
Who said anything about taking an entire rack offline? My idea was to make each server (with multiple servers per rack, obviously) self-contained with its own water connection, and easily removable without disturbing the other servers in the rack.
Re: (Score:2)
Re: (Score:2)
Put grating just above the floor, so any liquid ends up under it. The truth is, if the potential cost savings are significant, then people will get creative and find ways to deal with the 'minor' issues.
Re: (Score:1)
someone urinates in the cooling liquid, that is.
Just keep Tyler Durden out of the computer room and you'll be fine.
Re: (Score:2)
Ping times go way up?
Re: (Score:2)
Because the technology business is staffed with armies of amateurs who don't understand how to properly implement "lights-out management" at their datacenters. They somehow feel safer, warmer, and fuzzier because they can physically drive to their servers at 3am to press a reboot switch, or pop a CD-ROM into a tray.
Those of us who know better invest in per-port IP-KVM switching with virtual USB media support, plus remote power control. We can hard power cycle a crashed server from the beach, using MidpSSH o