Startup's Submerged Servers Could Cut Cooling Costs 147

1sockchuck writes "Are data center operators ready to abandon hot and cold aisles and submerge their servers? An Austin startup says its liquid cooling enclosure can cool high-density server installations for a fraction of the cost of air cooling in traditional data centers. Submersion cooling using mineral oil isn't new, dating back to the use of Fluorinert in the Cray 2. The new startup, Green Revolution Cooling, says its first installation will be at the Texas Advanced Computing Center (also home to the Ranger supercomputer). The company launched at SC09 along with a competing liquid cooling play, the Iceotope cooling bags."
This discussion has been archived. No new comments can be posted.

  • Or (Score:5, Insightful)

    by sabs ( 255763 ) on Thursday March 18, 2010 @03:01PM (#31527626)

    Until you have to try and RMA that CPU :)

    • Re:Or (Score:4, Insightful)

      by Z00L00K ( 682162 ) on Thursday March 18, 2010 @03:19PM (#31527992) Homepage Journal

      Don't forget the problems you run into when the server decides to spring a leak. Old servers and old cars would have the same level of sludge and oil puddles below them.

      And the weight of the servers will be higher too.

      • Sprinkler test in 3..2..1...

      • A leak? Have you worked with transformer oil??
        That is about the nastiest stuff you can think of.
        Imagine you've got a mouse on your desk that is connected to such a computer.
        Then the oil will slowly travel into the connector, through the inside of the cable, up to your mouse, and spread as a very thin oily film all over your goddamn desk! Now add dust to it, and you've got a really nasty mess. Good luck cleaning that up! At least your mouse, keyboard, display, etc., can go straight to the trash.

        And that’s the

  • by alen ( 225700 ) on Thursday March 18, 2010 @03:01PM (#31527630)

    The new Xeon 5600s draw less power than previous CPUs, and SSDs also run a lot cooler. How much does this liquid cooling enclosure cost, and what is the performance compared to just upgrading your hardware?

    HP is going to ship their Xeon 5600 servers starting on the 29th

    • by Anonymous Coward on Thursday March 18, 2010 @03:43PM (#31528496)

      Thanks for conveniently letting us know that HP's new server, based on the Xeon 5600, is shipping soon. I'll be sure to look out for that HEWLETT PACKARD server coming soon, with a Xeon 5600. On the 29th. I'll be looking for it.

      • I prefer to buy my servers from Dell, and if you can't take it, then I'll see you on July 12, 8 o'clock 9 pm central on pay per view! at the Royal Rumble in Las Vegas!

  • by GuyFawkes ( 729054 ) on Thursday March 18, 2010 @03:04PM (#31527696) Homepage Journal

    ..computers, allow me to label this a "fad"

    The idea is funky, but to get good cooling you want convection (every joule of pump energy from a circulating pump gets transferred into the oil as yet more heat), which means deep tanks, which means, for the server environment, goodbye high density.

    The ONLY thing that has changed since I was doing this is the affordability of SSDs, which means it is now practical to immerse the whole computer, mass storage and all. That makes things a lot simpler and cheaper, and means you really can be JUST oil cooled, not mostly oil cooled except for air-cooled HDs etc.

    TOP TIP from an old hand.

    If you are going to oil cool by immersion, buy the latest top quality hardware, because once immersed it stays there, you'll only pull it once to see why it sucks.

    BIGGEST mistake experimenters make is using old hardware, cos you always end up playing with it, making mess, ahh fsckit..

    Nota Bene if you are building one of these in anger, make allowances for the significant increase in weight that the oil adds.

    HTH etc

    • It was in order to build a totally silent computer. The cooling aspect worked OK, nothing spectacular, not if you lay out the case properly, buy fans with decent blade profiles and proper bearings, and decent aftermarket heatsinks, but the total silence was beautiful... even ATX PSUs do make a noise, you only notice when you immerse *everything*

      • Re: (Score:3, Interesting)

        by HungryHobo ( 1314109 )

        I'm also curious- is there any kind of fire hazard doing this on a large scale?

        There isn't a lot to burn in a normal computer (at least not burn really well), but could a short circuit near a leak lead to an inferno in an oil cooled data centre?

        Or is the oil treated in some way to make it less likely to burn?

        • by MoralHazard ( 447833 ) on Thursday March 18, 2010 @04:50PM (#31529568)

          Educate thyself: http://en.wikipedia.org/wiki/Mineral_oil#Mechanical.2C_electrical_and_industrial

          Just because something CAN burn doesn't make it dangerous to have around potential sources of electrical arcing. Hydrocarbon petroleum products present no real fire/explosion danger unless the substance is warmer than its flash point, which is the temperature above which the liquid substance can evaporate into the air. Below the flash point temperature, oil is only as flammable as plastic. The evaporated fumes mixed into the air are the ignition danger, not the liquid itself.

          This is because ongoing hydrocarbon combustion requires steady supplies of freely-mixing HC and oxygen. Sustaining the reaction requires the input of a tremendous volume of oxygen (compared to the liquid fuel volume, anyway), and the oxygen has to get rapidly mixed with the HC. That mixing can't happen quickly enough to the liquid HC. That's why the flash point is such an important consideration--the gaseous HC fumes mix quite well and quickly with atmospheric oxygen, creating nice conditions for a sustained combustion (a fire).

          This is even true of gasoline (flash point = -40F). If you pour gasoline into a pail in the middle of a bad Antarctic winter, and you throw a match into the pail, the gasoline will just extinguish the match like a bucket of water.

          Of course, if you mix liquid HC with liquid oxygen, or any other eager oxidizers, all bets are off. That shit will explode at cryogenic temperatures if you just look at it funny. (That's how rocket engines work.)

        • by lukas84 ( 912874 )

          Not all oil burns well at atmospheric pressure, or at all for that matter.

        • by dbIII ( 701233 )

          I'm also curious- is there any kind of fire hazard doing this on a large scale?

          Didn't matter at all in his case since it was in a basement under the British Houses of Parliament.
          On the serious side, I used to do electrochemical machining in a deep kerosene bath - that involves passing a high voltage arc and a lot of current through the kerosene. It's harder to get this stuff burning than you would think, so long as you take care.

    • by Rich0 ( 548339 )

      Why not use a water heat exchanger outside the case to cool the oil (while keeping water away from system components, and getting full contact with the entire system)? The water could then go into a loop to cool it. Other coolants could also be used, although water is great from a heat capacity standpoint.

      Since the water doesn't touch anything important, it can be dumped into a cooling tower/etc.

      To cool one system I doubt it is worth all the trouble, but for a datacenter I bet you could make it very effic

    • How do you build a server 'in anger'?

    • ...you want convection (every joule of pump energy from a circulating pump gets transferred into the oil at yet more heat) which means deep tanks which means, to the server environment, goodbye high density.

      Really? You could say the same about air moved by a fan (that the fan's energy contributes to the overall heat). I'm no expert in this area, but I've seen liquid cooled PCs and the only big component is the radiator. I would think you could pack liquid cooled components more densely than air cooled, and you could put the radiator in another room.

    • Just curious, and you seem like the guy to ask, has anyone done full center immersion? With the proliferation of shipping container rack systems, would it be possible to seal the entire container into one giant unit with a manhole on the top, then drop in a diver with either tanks or a line and let them do maintenance without worries of spillage? You'd be able to keep the same density as is currently used, since you'd be able to use the normal maintenance space as space for convection currents and the nor

    • Are SSD's submersible?
  • Maintenance Access? (Score:5, Interesting)

    by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Thursday March 18, 2010 @03:05PM (#31527712)

    How much harder does it make standard maintenance: moving cables, swapping hard drives, changing components?

    One of the advantages of a standard rack to me is that all of that is fairly easy and simple, so you can fix things quickly when something goes wrong.

    • Comment removed based on user account deletion
    • by Rich0 ( 548339 )

      Agreed, although if this became standard and built into racks then maybe each server would just have a button next to it that pumped out the coolant quickly. Hot-swaps probably wouldn't work inside the case itself, since you'd have to remove the coolant to perform this task.

      Alternatively, you could perform a hot swap immersed in oil if you did it quickly - the oil probably couldn't be circulated with the case open but it would at least be there. I'm not sure that this would actually buy you much though, a

    • by ArsonSmith ( 13997 ) on Thursday March 18, 2010 @03:44PM (#31528506) Journal

      scuba gear and lessons for all sys admins!! All datacenters could just be a giant pool of swirling oil.

      • If the external cooling for the oil failed, you might end up with some mighty crispy techs...

        Just in case, have them roll in breading before going in; then you could at least salvage the meat :-D

        Mmm... Country Fried Tech...

      • Re: (Score:3, Funny)

        by Hoi Polloi ( 522990 )

        Do we get old timey shirts with our name on them too?

        "I see, ahh, your problem here maam. Your server rack is down a few pints. I'll top it off and put it on the lift and check the pump too."

  • Was I the only one who read the headline and immediately thought of some kind of underwater data center? That would have been cool!
    • No, I thought they meant it was submerged as well, as in using the earth's water and soil as a heatsink. Sort of like those geothermal heating/cooling units some houses have. The deep water is always 67 deg F, so it warms in the winter and cools in the summer. Massively more efficient than conventional oil heat and electric AC. For all the attention Al Gore received for Global Warming, it was President Bush who has one of these at his Crawford ranch.

      Anyway, this is much less interesting. Oh well.
    • Nope, you're not the only one. I had a vision of sysadmins in SCUBA gear doing hardware swaps.

  • You'll obviously need to be scaling before you invest in a system that involves a big vat full of oil.

    Also, what does the fire marshal think of a big vat full of oil? Hazardous disposal? Oh boy... some company goes BK, and they leave behind a big vat full of oil and outdated electronics.

    I didn't dig deep enough to see if they are actively pumping the oil or not. If they are, they're not doing it right. Any system that really cuts cooling costs should be using an LTD engine to transform the heat into useful

    • Re: (Score:3, Insightful)

      by Grishnakh ( 216268 )

      You don't need oil-air heat exchangers, oil vats, or anything of the kind. What you need is chilled WATER, which is already generated by cooling plants. Run this water to each server using simple pipes and a large pump for the whole facility, and then put an oil/water heat exchanger inside each chassis, along with a pump to circulate the oil.

      Is the efficiency going to be better? Maybe, maybe not, who cares. What's different is that cooling is much easier with 3/8" pipes of water rather than worrying abo

    • My initial thoughts were "Why on earth would you use the engine from an LTD [wikipedia.org]?"

      My ambiguous Wikipedia search revealed that you were in fact referring to a Stirling engine (aka. a low temperature difference engine).

    • So. That leads us to the questions: Is your overall system efficiency going to be better in some way by running hotter?

      As someone who has taken a class in electronics, I can assure you that the efficiency of electronic equipment drops with increasing temperature, as leakage currents increase. This may even lead to a thermal runaway situation.
      Running hot is also pretty bad as far as reliability goes.

    • The reason they use oil, or some fluorocarbon, is that it doesn't conduct electricity like water does. However, just because they have oil in the servers does not mean that they will be pumping oil out of the server room, or even out of the server itself, to cool it. One way you could do it is to oil cool each server in a rack using a rack mounted supply, then use a water system to cool the rack mounted supply. This is the way Iceotope does it.
    • Re: (Score:2, Interesting)

      by Zapo_Verde ( 1406221 )
      Most power transformers are oil cooled. In every substation there are a few big ones, and there are many smaller ones on pole tops or on the ground in suburbs. They pump the oil through the transformer and into a radiator that may or may not be fan cooled. If you build it right, sometimes you don't even need a pump; you can just use the changing density of oil as it heats to have it move itself through the loop. Cooling computers would use the same principle. Oil is a good insulator. There is a certain amou
  • I'm starting a pool. How much longer before the mainframe is re-invented to power cloud computing? I'm taking 1.5 years. Any other bets?
    • 2.5 but it will be a mainframe that is powered by GPU's

      • Sort of like installing little Linux LPARs and such. Very amusing.

        Mainframes are still the very best power/performance out there... and probably always will be :)

    • If you think about it, a "server farm" really isn't that different from a "mainframe"; it's a whole bunch of CPUs working in parallel, all packed into one room. The only real difference is that most server farms are implemented with separate OSes on each system, instead of a single OS for the whole thing, which is good for redundancy and partitioning but not so great for efficiency. It'd be a lot more simple and efficient if we just had one big OS for the whole system, with different users using different

    • by ebuck ( 585470 )

      Well, if you're starting a pool, throw in a cloud of servers and you'll be the pioneer.

      Come to think of it, I'll refrain from betting on this one; when you're so poised to control the outcome, odds are I'll lose.

    • by julesh ( 229690 )

      I'm starting a pool. How much longer before the mainframe is re-invented to power cloud computing? I'm taking 1.5 years. Any other bets?

      Already happened. Seriously. How do you define "mainframe"? Let's look at the "characteristics" section of wikipedia's article on them:

      * ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines

      It's quite common for any server type now to do this.

      * add or hot swap system capacity non disruptively

  • by JPerler ( 442850 ) on Thursday March 18, 2010 @03:32PM (#31528270) Homepage

    Hard disks aren't sealed, there's always (at least, on the dozens of disks I've taken apart) a little felt-pad or sticker covered vent on them. I figured it was for equalisation or something crazy, but I'm not positive.

    Given hard disks aren't sealed, wouldn't they fill with fluid? And assuming they'd still function with a liquid interfering with the head mechanism (given that modern disks' heads float above the platter surface on a cushion of air), wouldn't the increased viscosity slow down seek events?

    • Solid state disks.

      Essentially, if it has moving parts, it probably stays in air, and uses either conventional air cooling or contact non-submergence liquid cooling.

    • Re: (Score:3, Informative)

      by mnmoore ( 50459 )

      In the embedded video, they indicate that hard disks need to be wrapped in some material the vendor apparently provides, presumably for just this reason. Not sure how well the wrapping transfers heat.

    • Re: (Score:3, Interesting)

      by Grishnakh ( 216268 )

      No, the fluid would completely ruin the hard drive because they're not designed for that.

      There are two ways around this problem that I see:
      1) Use SSD disks instead of mechanical platter HDs.
      2) Use regular HDs, but do not submerge them in the cooling oil. Instead, put them in some type of aluminum enclosure which conducts the heat to the cooling oil, but keeps it from contacting the HD itself, sort of like what the water-cooling enthusiasts do for their hard drives today.

      And yes, I believe you're correct abou

    • by ebuck ( 585470 )
    Such issues could easily be solved by only submersing the compute nodes (which connect back to an external SAN for storage), encasing hard drives in airtight containers which have (heat) conductive contact with the drive body, or using newer SSDs to remove the need for an air cushion between your non-existent head and your non-existent platter.
  • by TheNinjaroach ( 878876 ) on Thursday March 18, 2010 @03:35PM (#31528352)
    Won't these servers bathed in oil still have the same thermal output? I don't understand why it would be cheaper to cool oil than air or any other medium.
    • Mineral oil has a thermal conductivity 5 times greater than air, and is much easier to pump around. I expect the difference in cp is similar but wikipedia doesn't list a value for oil.
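      The parent's 5x conductivity figure is consistent with standard textbook property values; a quick sketch for scale (the air and light-mineral-oil numbers below are generic handbook figures, not from the article or the vendor):

      ```python
      # Rough air-vs-oil coolant comparison using generic handbook properties.
      # k: thermal conductivity W/(m*K); rho: density kg/m^3; cp: specific heat J/(kg*K).
      air = {"k": 0.026, "rho": 1.2, "cp": 1005}
      oil = {"k": 0.13, "rho": 850, "cp": 1900}   # typical light mineral oil

      k_ratio = oil["k"] / air["k"]  # conductivity advantage of oil
      # Heat capacity per unit VOLUME is what matters for a fixed-size tank or duct.
      cp_ratio = (oil["rho"] * oil["cp"]) / (air["rho"] * air["cp"])

      print(f"thermal conductivity, oil vs air: {k_ratio:.0f}x")
      print(f"volumetric heat capacity, oil vs air: {cp_ratio:.0f}x")
      ```

      Per unit volume the gap is three orders of magnitude, which is why gentle convection in oil can move heat that air needs high-velocity fans to carry.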
    • Re: (Score:3, Interesting)

      by eh2o ( 471262 )

      Air actually has a very high thermal resistance, so one needs forced circulation to transport moderate amounts of heat. Running all those fans uses more energy. In fact, in any closed room, running a fan may cool the objects immediately in front of it, but overall the room heats up from the power use.

      Oil has a very low thermal resistance naturally so one can use ordinary convection instead (up to some point).

      A less messy solution would be for servers to be made with integ

    • Re: (Score:2, Interesting)

      The company's website [grcooling.com] claims that it's easier to cool oil than to cool air. Their argument is that conventional air cooling requires 45 degree F air to keep components at 105 degree F, whereas the higher heat capacity of the oil lets it come out of the racks at 105F. The oil is hotter than ambient air (at least where I live), so it should be easier to remove its heat (through a heat exchanger) than to chill warm exhaust air back to 45F (through a refrigeration unit). Of course most components can run hot
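      A back-of-envelope check on that argument: the coolant volume you have to move per second for a fixed heat load follows from Q = mdot * cp * dT. The sketch below uses generic property values and an assumed 10 K temperature rise for the oil (the article doesn't give one); only the 45F/105F air figures come from the comment above.

      ```python
      # Back-of-envelope: volumetric coolant flow needed to carry away a 1 kW server.
      # Q = mdot * cp * dT  =>  mdot = Q / (cp * dT);  volume flow = mdot / rho.
      Q = 1000.0  # heat load, watts

      def flow_l_per_s(rho, cp, dT):
          """Litres/second of coolant needed to absorb Q watts with a dT kelvin rise."""
          return Q / (cp * dT) / rho * 1000.0

      air_flow = flow_l_per_s(rho=1.2, cp=1005.0, dT=33.0)   # ~60F rise: 45F in, 105F out
      oil_flow = flow_l_per_s(rho=850.0, cp=1900.0, dT=10.0)  # assumed modest 10 K rise

      print(f"air: {air_flow:.1f} L/s   oil: {oil_flow:.3f} L/s")
      ```

      Even granting air the full 60F rise, the oil loop moves hundreds of times less volume for the same load, and it leaves the rack warm enough to reject heat to ambient instead of needing refrigeration.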

    • Re: (Score:3, Interesting)

      by Grishnakh ( 216268 )

      It's not cheaper to cool oil. However, it's easier, because you can use oil-to-water heat exchangers, and cool the whole server farm with a chilled water plant (like A/C, but only chills water and never uses it to cool air). The benefit of this is that you don't have to worry about airflow, ductwork, and the like, and you can pack servers much more densely into a space than with air cooling. Since floor space in a facility like this is expensive, this saves money. It might also be more efficient to use

      • It might also be more efficient to use chilled water in pipes to cool the servers directly rather than chilling air and blowing that around a big building.

        Especially when it's free. I used to work at a medical center with a big data center. Cold city water was run first to the data center, heat-pumped to a cold-air Liebert, and then the slightly-warmer water was piped on to all the places where cold water is used. A degree or two warmer is quite fine at the tap.

        Smart downtown-City data centers would work

        • Won't work here in Phoenix. Here, in the summertime, there is no "cold" water faucet in your home; there's only "warm" and "hot". Many times, the "warm" faucet is just as hot as the "hot" one.

          Of course, I don't know what kind of idiot would locate a datacenter in Phoenix anyway. Except maybe Paypal.

          • Heh, that's funny. Fortunately fiber optics run to cold places pretty well.

              • Yep. I seriously don't know why all datacenters aren't located in northern climes. I guess many are probably in Calif. because of all the available talent. But there's no talent in Phoenix (what educated people were here have moved out or are in the process of leaving; the only ones left are all the zombies working at the local defense contractors like General Dynamics and Honeywell). Why Paypal is located here, I have no idea.

              I can't wait to move out of this town, in case it isn't obvious.

  • Oh..... there's something Google didn't think of and try.

  • Astute move. They're named "Green Revolution Cooling". Everyone knows you can't go wrong when you go "green".
  • by colordev ( 1764040 ) on Thursday March 18, 2010 @03:50PM (#31528612) Homepage
    A server with this [newegg.com] Intel Atom equipped mobo draws something like 25-35W under full load. And the performance of these D510 dual core processors is comparable [cpubenchmark.net] to better Pentium 4 processors.
    • by h4rr4r ( 612664 )

      So the future is going to be slow, really really slow?

      We keep quad socket quad Xeon boards at very high usage all the time. These things are not going to cut it.

      • You don't need bigger and bigger individual machines, if you have fast enough IO and your software engineers know WTF they are doing. There are alternative parallel algorithms for practically any problem you'd naively solve in a highly serial way. Given the right programming skill set, we could run just about any web app you care to imagine on a farm of SheevaPlugs (http://en.wikipedia.org/wiki/SheevaPlug). Kind of cute, don't you think?

        Why do you think places like Google and the big quant-heavy finance fi

    • A server with this [wikipedia.org] draws 7w under full load.
    • by afidel ( 530433 )
      Uh, no. The future is here, and it is virtualization. I can have a VM for a fraction of that power, and it can actually perform when it has work to do. For instance, I'm running 53 VMs in 800W on 5 hosts, for 15W per VM, and those hosts aren't even in the least bit taxed; they should be able to support 3x as many VMs with minimal additional power, bringing the eventual number closer to 5W per VM.
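      The parent's per-VM figures are easy to verify; a trivial sanity check using only the numbers quoted in the comment above:

      ```python
      # Sanity check on the per-VM power figures quoted in the parent comment.
      hosts, total_watts, vms = 5, 800, 53
      w_per_vm_now = total_watts / vms         # current consolidation
      w_per_vm_3x = total_watts / (vms * 3)    # if each host held 3x as many VMs

      print(f"{vms} VMs on {hosts} hosts: {w_per_vm_now:.0f} W/VM now, "
            f"{w_per_vm_3x:.0f} W/VM at 3x density")
      ```

      The host count only matters for the consolidation headroom claim; the per-VM wattage is just total draw divided by VM count.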
  • Mainframe (Score:5, Interesting)

    by snspdaarf ( 1314399 ) on Thursday March 18, 2010 @04:26PM (#31529196)
    I seem to remember mainframes using distilled water for cooling decades ago. Not being a member of the correct priesthood, I was not allowed in the mainframe room, so I don't know how it was set up then. I have seen how oil-filled systems work, and I would hate to work on one. Nasty mess.
    • Re: (Score:1, Informative)

      by Anonymous Coward

      Ah yes, the good old days...

      As I remember it, there were a couple of levels of coolant that were used to cool off a mainframe - some mystery liquid was pumped around through tubes that would flow by the chips needing cooling - it had all the necessary qualities, including being non-conductive in case of a leak. Then that liquid was pumped through a heat exchanger where the heat would get transferred to distilled water, which was then pumped to some cooling unit (up on the roof in our case).
      I still rememb

  • Comment removed based on user account deletion
    • I've been pondering that for a while; personally, I just don't see how to create an effective heatpipe from processor or machine to water pipe.
  • The Cray 2 had a three stage cooling system; the Fluorinert was pumped through a heat exchanger and dumped its heat into chilled water, which was either provided by the site's existing HVAC infrastructure or (more likely, since the dissipation was in the megawatt range) by a dedicated freon-based water chiller. The 5th generation Cray Inc (as opposed to CCC) machine also used immersion cooling in a similar vein. Many other Cray machines (YMP, C90, and so on) used the same 3-stage cooling system, but the modules were
    • by julesh ( 229690 )

      this was ECL logic

      And there I was thinking they went straight from TTL logic to CMOS logic logic.

  • Submersion cooling using mineral oil isn't new, dating back to the use of Fluorinert in the Cray 2.

    Fluorinert [wikipedia.org] is not mineral oil [wikipedia.org], nor even very similar to mineral oil.

  • GRC's Mark Tlapak tells me that Iceotope's system is "beautiful but costly", while Iceotope's Peter Hopton dismisses GRC as "fishtank manufacturers".
    Basically, it looks like a simple solution (a bath) versus a more complex one (individual sealed blades). The discussion is here at eWEEK Europe UK [eweekeurope.co.uk].

    Peter Judge UK Editor, eWEEK Europe
  • The article describes a system where servers are stored in what is essentially a rack laid down on the ground and filled with oil. Now, this is going to be too heavy, I would have thought, to support at any height off the ground, so you're limited to only using the bottom 60cm or so of each room in your datacenter for server storage. Isn't this going to mean you only get half as many servers in there?

  • I interviewed [youtube.com] these guys at SC09 for Linux Magazine. There are some close up shots of the servers in the oil.

  • Is it at all feasible to run a computer submerged in distilled water? You'd have to ensure that the water remains pure, obviously, but this might be easier than dealing with computers submerged in oil. The obvious advantage is that distilled water is more benign and MUCH easier to work with. Any spills can be cleaned up with a rag, for one thing.
