
Startup's Submerged Servers Could Cut Cooling Costs

1sockchuck writes "Are data center operators ready to abandon hot and cold aisles and submerge their servers? An Austin startup says its liquid cooling enclosure can cool high-density server installations for a fraction of the cost of air cooling in traditional data centers. Submersion cooling isn't new; it dates back to the Fluorinert bath of the Cray-2, though the new systems use far cheaper mineral oil. The startup, Green Revolution Cooling, says its first installation will be at the Texas Advanced Computing Center (also home to the Ranger supercomputer). The company launched at SC09 alongside a competing liquid cooling play, the Iceotope cooling bags."
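
For a rough sense of what "a fraction of the cost" can mean, here is a back-of-the-envelope sketch comparing cooling overhead at two assumed PUE figures. The IT load, electricity price, and both PUE values are illustrative assumptions, not numbers from Green Revolution Cooling.

```python
# Illustrative comparison of annual cooling/overhead cost for 1 MW of IT load.
# PUE values and electricity price are assumptions for the sake of example.
IT_LOAD_KW = 1000          # 1 MW of server (IT) load
PRICE_PER_KWH = 0.10       # USD, assumed utility rate
HOURS_PER_YEAR = 8760

def annual_overhead_cost(pue):
    """Cost of the non-IT overhead (mostly cooling) implied by a given PUE."""
    overhead_kw = IT_LOAD_KW * (pue - 1.0)
    return overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH

air_cooled_pue = 1.8       # assumed legacy air-cooled facility
immersion_pue = 1.15       # assumed optimistic immersion-cooled figure

print(annual_overhead_cost(air_cooled_pue))   # ~$700,800 per year
print(annual_overhead_cost(immersion_pue))    # ~$131,400 per year
```
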

  • Or (Score:5, Insightful)

    by sabs ( 255763 ) on Thursday March 18, 2010 @04:01PM (#31527626)

    Until you have to try and RMA that CPU :)

  • Re:Ease of Service (Score:4, Insightful)

    by eln ( 21727 ) on Thursday March 18, 2010 @04:16PM (#31527926)
    In any kind of large data center environment, the whole floor is going to be covered in that shit in short order. I can just imagine the fun of dealing with workers' comp claims every other week because someone slipped on liquid coolant and injured themselves. Even with high quality components, if you have 30,000 servers in a big room, you're going to have someone out there fiddling with one or more of them on a daily basis, and keeping things clean when they're all fully immersed like that would be next to impossible, especially if you're dealing with oil.
  • Re:Or (Score:4, Insightful)

    by Z00L00K ( 682162 ) on Thursday March 18, 2010 @04:19PM (#31527992) Homepage Journal

    Don't forget the problems you run into when a server decides to spring a leak. Old servers would end up with the same sludge and oil puddles beneath them as old cars.

    And the weight of the servers will be higher too.

  • by eln ( 21727 ) on Thursday March 18, 2010 @04:24PM (#31528118)
    There are other ways to make data center cooling more efficient, such as hot aisle containment and individual rack-top coolers blowing cold air directly in front of the racks. There's no reason a modern data center needs to move entire buildings full of air anymore, even without liquid cooling.

    Oil immersion may or may not be more efficient, but it doesn't seem like it would scale well. In a large data center with tens of thousands of servers, some hardware component fails every day, and keeping all that oil contained within the enclosures would be a major challenge. During maintenance, that stuff is going to get all over everything, including the tech, who can easily spread it onto anything he touches before he gets around to cleaning up. You'd need a cleaning crew out on the floor constantly.
  • by TheNinjaroach ( 878876 ) on Thursday March 18, 2010 @04:35PM (#31528352)
    Won't these servers bathed in oil still have the same thermal output? I don't understand why it would be cheaper to cool oil than it would be to cool air or any other medium.
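
The heat produced is indeed the same either way; any savings come from how cheaply that heat can be moved. A rough comparison of the volumetric heat capacity of mineral oil and air (property values are rounded textbook figures) shows why a liquid bath needs far less moving volume, and therefore far less fan or pump work, than air:

```python
# Rough volumetric heat capacity comparison: mineral oil vs. air.
# Property values are approximate textbook figures, not vendor data.

# Air at roughly 25 C, sea level
air_density = 1.2          # kg/m^3
air_cp = 1005.0            # J/(kg*K)

# Typical mineral oil
oil_density = 850.0        # kg/m^3
oil_cp = 1900.0            # J/(kg*K)

air_vol_heat_cap = air_density * air_cp   # ~1.2e3 J/(m^3*K)
oil_vol_heat_cap = oil_density * oil_cp   # ~1.6e6 J/(m^3*K)

# Oil carries on the order of 1,300x more heat per unit volume than air.
print(oil_vol_heat_cap / air_vol_heat_cap)
```
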
  • Re:Ease of Service (Score:4, Insightful)

    by Grishnakh ( 216268 ) on Thursday March 18, 2010 @04:48PM (#31528570)

    With 30,000 servers in a big room, you do NOT want anyone "fiddling" with them at all. They need to be removed from the room and taken someplace else to be "fiddled" on.

    Here's an idea. It would require a chassis redesign, but it would remove most of the maintenance problems.

    Make a special case for each system, with no fans (since those are only useful for air-cooled systems) and some type of pump for circulating the cooling oil. In this circulation loop is a heat exchanger built into each chassis. The back of the chassis has two quick-connect fittings for connection to a cooling water supply, the kind that seal themselves when unplugged, fitted on both the water supply and the chassis. This way, when a server malfunctions, all the tech has to do is unplug it and pull it out of the rack. The water connectors disengage, so only a few drops of water spill (and evaporate quickly), and all the cooling oil stays contained within the server chassis.

    The server can then be taken to a designated maintenance area where the oil can be drained and the server operated on, and then refilled with oil and plugged back into the server rack.
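
To put rough numbers on that sealed-chassis idea, here is a minimal energy-balance sketch for the chilled-water side of one enclosure. The server wattage and the allowed water temperature rise are assumed values, not figures from any vendor:

```python
# Energy balance for the chilled-water loop of one sealed, oil-filled chassis.
# Server heat load and allowed water temperature rise are assumptions.

WATER_CP = 4186.0        # J/(kg*K), specific heat of water
WATER_DENSITY = 1000.0   # kg/m^3

def water_flow_lpm(server_watts, delta_t_c):
    """Liters per minute of water needed to carry away server_watts at a delta_t_c rise."""
    kg_per_s = server_watts / (WATER_CP * delta_t_c)
    m3_per_s = kg_per_s / WATER_DENSITY
    return m3_per_s * 1000.0 * 60.0   # convert m^3/s to L/min

# A dense 1U box dissipating 500 W, with the water allowed to warm by 10 C:
print(round(water_flow_lpm(500, 10), 2))   # ~0.72 L/min per chassis
```
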

  • Re:Oh yuck. (Score:3, Insightful)

    by Grishnakh ( 216268 ) on Thursday March 18, 2010 @04:57PM (#31528724)

    You don't need oil-air heat exchangers, oil vats, or anything of the kind. What you need is chilled WATER, which is already generated by cooling plants. Run this water to each server using simple pipes and a large pump for the whole facility, and then put an oil/water heat exchanger inside each chassis, along with a pump to circulate the oil.

    Is the efficiency going to be better? Maybe, maybe not, who cares. What's different is that cooling is much easier with 3/8" pipes of water rather than worrying about ductwork and A/C units. This will also allow you to have much, much higher server density than with air cooling; fluid is a much better (and denser) conductor of heat than air. Instead of wasting a lot of space on fans and ductwork and other places for air to flow, you only have to worry about some little pipes. Floor space is expensive in a facility like this.

    And if you keep the cooling oil contained within the servers, you won't have to worry about any mess.
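
As a sanity check on those 3/8" pipes, here is a short sketch of how much heat a single line of that size can move. The water velocity and supply/return temperature rise are assumed, typical-looking values:

```python
# How much heat can a single 3/8" chilled-water line move?
# Velocity and temperature rise are assumed values.
import math

PIPE_ID_M = 0.0095       # ~3/8 inch inner diameter, in meters
VELOCITY_M_S = 1.5       # assumed water velocity
DELTA_T_C = 10.0         # assumed supply/return temperature rise
WATER_CP = 4186.0        # J/(kg*K)
WATER_DENSITY = 1000.0   # kg/m^3

area = math.pi * (PIPE_ID_M / 2) ** 2            # pipe cross-section, m^2
mass_flow = area * VELOCITY_M_S * WATER_DENSITY  # kg/s
heat_watts = mass_flow * WATER_CP * DELTA_T_C

print(round(heat_watts))   # roughly 4.5 kW per line under these assumptions
```
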

  • by Anonymous Coward on Thursday March 18, 2010 @07:51PM (#31530892)

    You are making the assumption that individual servers or even racks must be changed out regularly. Considering how many data centers are no longer space constrained, but rather power constrained by the number of kW per square foot, or cooling constrained due to local regulations or power consumption, other approaches are valid.

    The opposite conclusion is a containerized datacenter/rack cluster using oil immersion as the internal primary coolant, hooking into a datacenter-fed cold-water heat exchanger mounted at the end of the container. With that you would nominally design for a specific rated gigaflops/Gbps for the container as a whole. At first you have more than that, but as devices fail, you fail in place. When the container's performance drops below rated, you swap out the whole container. Considering the depreciation and rated lifecycle/lifetime of servers, this is not an unreasonable proposition. Say the expected rated lifetime is three years, with containers swapped through the manufacturer as part of a trade-in/leased-pool financing plan. When a container is brought in for maintenance, the equipment and personnel needed to deal with oil immersion are on hand. The manufacturer can refurbish or replace the servers inside to return the container to the lease pool, or, if that isn't cost effective, drain the bastard, sell off the remaining servers, and recycle the container or simply sell the old container whole as a "below rated" or EoL product.
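
A crude way to model that fail-in-place proposal is to assume a fixed annual failure rate with no replacements and see when the container drops below its rated capacity. All the numbers here (server count, failure rate, rated threshold) are illustrative assumptions:

```python
# Crude fail-in-place model for a sealed container of servers.
# Failure rate, server count, and rated fraction are illustrative assumptions.

SERVERS = 1000              # servers sealed in the container at deployment
ANNUAL_FAILURE_RATE = 0.05  # 5% of surviving servers fail each year (assumed)
RATED_FRACTION = 0.85       # container is "below rated" once capacity < 85% of initial

def years_until_below_rated():
    alive = float(SERVERS)
    year = 0
    while alive / SERVERS >= RATED_FRACTION:
        year += 1
        alive *= (1 - ANNUAL_FAILURE_RATE)   # fail in place, no replacements
    return year

print(years_until_below_rated())   # 4 years at these assumed numbers
```
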
