
The Risks and Rewards of Warmer Data Centers 170

Posted by samzenpus
from the not-so-chilly dept.
1sockchuck writes "The risks and rewards of raising the temperature in the data center were debated last week in several new studies based on real-world testing in Silicon Valley facilities. The verdict: companies can indeed save big money on power costs by running warmer. Cisco Systems expects to save $2 million a year by raising the temperature in its San Jose research labs. But nudge the thermostat too high, and the energy savings can evaporate in a flurry of server fan activity. The new studies added some practical guidance on a trend that has become a hot topic as companies focus on rising power bills in the data center."

  • Re:What about HDDs? (Score:5, Informative)

    by DomNF15 (1529309) on Thursday October 22, 2009 @09:50AM (#29834597)
    No, they didn't. What they found is that increased temperature is not correlated with higher failure rates; the failure rates don't magically decrease as it gets hotter.

    Here's the link for your review: http://hardware.slashdot.org/story/07/02/18/0420247/Google-Releases-Paper-on-Disk-Reliability [slashdot.org]
  • Re:What about HDDs? (Score:2, Informative)

    by jeffmeden (135043) on Thursday October 22, 2009 @09:59AM (#29834677) Homepage Journal

    Until what point? You can't consistently say "increase the temperature to decrease the MTBF".

    You'll end up with molten slag.

    Yes, you can. MTBF = mean time between failures. To decrease, reduce, lower it, however you want to say it: the unit is going to fail SOONER, meaning it is getting LESS reliable. That was the point: hotter temps = less reliability. The same goes for just about any physical/chemical process (fans, batteries, hard drive motors, etc.)
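    The temperature-reliability relationship described here is commonly modeled with the Arrhenius equation. A minimal sketch; the 0.7 eV activation energy and 25 C reference are assumed rule-of-thumb values for illustration, not figures from the studies:

    ```python
    import math

    def mtbf_scale(t_celsius, t_ref_celsius=25.0, ea_ev=0.7):
        """Relative MTBF versus a reference temperature, via the
        Arrhenius acceleration factor. ea_ev is an assumed activation
        energy (0.7 eV is a common rule of thumb for electronics)."""
        k = 8.617e-5  # Boltzmann constant in eV/K
        t = t_celsius + 273.15
        t_ref = t_ref_celsius + 273.15
        # Acceleration factor AF = exp(Ea/k * (1/T_ref - 1/T));
        # MTBF shrinks by a factor of 1/AF as temperature rises.
        af = math.exp((ea_ev / k) * (1.0 / t_ref - 1.0 / t))
        return 1.0 / af

    print(mtbf_scale(25.0))  # 1.0 at the reference temperature
    print(mtbf_scale(35.0))  # factor < 1: shorter expected life when hotter
    ```

    Under these assumptions a 10 C rise cuts the expected MTBF by more than half, which is the commenter's point in miniature.
    
    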

  • Re:Ducted cabinets (Score:2, Informative)

    by Cerberus7 (66071) on Thursday October 22, 2009 @10:13AM (#29834777)

    THIS. I was going to post the same thing, but you beat me to it! APC makes exactly what you're talking about. They call it "InfraStruXure." Yeah, I know... Anywho, here's a link to their page for this stuff [apc.com].

  • Turn fans down? (Score:2, Informative)

    by RiotingPacifist (1228016) on Thursday October 22, 2009 @10:21AM (#29834849)

    I used to have a Pentium 4 Prescott; the truth is that processors can run significantly above spec (hell, the thing would go above its "max temp" just opening Notepad). It's already been shown that higher temps don't break HDDs, so are the downsides of running the processor a few degrees hotter significant, or can they be ignored?

  • Re:Ducted cabinets (Score:3, Informative)

    by Linker3000 (626634) on Thursday October 22, 2009 @10:27AM (#29834915) Journal

    While I was looking at aircon stuff for our small room, I came across a company that sold floor-to-ceiling panels and door units that allowed you to 'box in' your racks and then divert your aircon into the construction rather than cooling the whole room. Seems like a sensible solution for smaller data centres or IT rooms with 1 or 2 racks in the corner of an otherwise normal office.

  • Re:Quick solution (Score:5, Informative)

    by jschen (1249578) on Thursday October 22, 2009 @10:39AM (#29835059)

    It is true that if you are producing X BTUs of heat inside the room, then to maintain temperature, you have to pump that much heat out. However, the efficiency of this heat transfer depends on the temperature difference between the inside and the outside. To the extent you want to force air (or any other heat transfer medium) that is already colder than outside to dump energy into air (or other medium) that is warmer, that will cost you energy.

    Also, too cold, and you will invite condensation. In your hypothetical scenario, you'd need to run some pretty powerful air conditioning to prevent condensation from forming everywhere.
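    The dependence of cooling efficiency on the inside/outside temperature difference can be illustrated with the ideal (Carnot) coefficient of performance, a theoretical upper bound on heat moved per unit of work. A sketch; the 18 C / 27 C / 35 C temperatures are assumed example values:

    ```python
    def carnot_cop_cooling(t_inside_c, t_outside_c):
        """Ideal (Carnot) coefficient of performance for cooling:
        heat moved per unit of work, as a theoretical upper bound."""
        t_inside = t_inside_c + 273.15
        t_outside = t_outside_c + 273.15
        if t_outside <= t_inside:
            return float("inf")  # in the ideal limit, no work is needed
        return t_inside / (t_outside - t_inside)

    # Raising the room setpoint from 18 C to 27 C (outside air at 35 C)
    # roughly doubles the ideal COP: less work per unit of heat removed.
    print(carnot_cop_cooling(18, 35))  # ~17.1
    print(carnot_cop_cooling(27, 35))  # ~37.5
    ```

    Real chillers fall well short of the Carnot bound, but the trend is the same: the smaller the inside/outside gap, the cheaper each unit of heat is to move.
    
    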

  • Re:UNITS? (Score:3, Informative)

    by tepples (727027) <{moc.liamg} {ta} {selppet}> on Thursday October 22, 2009 @11:04AM (#29835373) Homepage Journal

    Fahrenheit backwards? That shit was metric before the Metric System even existed.

    To wit:

    0F is about as cold as it gets, and 100F is about as hot as it gets.

    You're right for the 40th parallel or so. But there are parts of the world that routinely dip below 0 deg F (-18 deg C) and other parts that routinely climb above 100 deg F (38 deg C). Things like that are why SI switched from Fahrenheit and Rankine to Celsius and Kelvin.

  • by Anonymous Coward on Thursday October 22, 2009 @11:06AM (#29835405)

    The use of SSDs in data centers can dramatically impact power usage and temperature management costs:

    "The power savings for the SSD-based systems is about 50 percent, and the overall cooling savings are 80 percent, according to the white paper. These savings are significant for a datacenter that spends 40 percent of its budget on power and cooling, and they're bound to make other datacenter operators sit up and take notice." http://arstechnica.com/business/news/2009/10/latest-migrations-show-ssd-is-ready-for-some-datacenters.ars

    While MTBF and unit cost are still concerns, the potential savings will likely see more centers moving in this direction.
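    A rough back-of-envelope using the quoted figures; how the 40% splits between power and cooling is not stated in the quote, so an even split is assumed here:

    ```python
    # Back-of-envelope from the quoted white paper figures: 50% power
    # savings, 80% cooling savings, with power + cooling at 40% of budget.
    # The even 20/20 split between power and cooling is an assumption.
    budget = 100.0
    power_cost = budget * 0.20
    cooling_cost = budget * 0.20
    savings = power_cost * 0.50 + cooling_cost * 0.80
    print(savings)  # 26.0 -> ~26% of total budget under these assumptions
    ```
    
    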

  • by pckl300 (1525891) on Thursday October 22, 2009 @11:37AM (#29835861)
    I was at a Google presentation on this last night. If I remember correctly, they found the 'ideal' temperature for running server hardware without decreasing lifespan to be about 45 C.
  • Re:Quick solution (Score:4, Informative)

    by Yetihehe (971185) on Thursday October 22, 2009 @12:43PM (#29836769)
    Condensation happens on surfaces colder than surrounding air. If you have computers which are warmer than your cooling air, it would not be a problem.
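    Whether a chilled surface condenses depends on the dew point of the room air. A sketch using the Magnus approximation; the 25 C / 50% RH operating point is an assumed example:

    ```python
    import math

    def dew_point_c(temp_c, rel_humidity_pct):
        """Approximate dew point in C via the Magnus formula."""
        a, b = 17.62, 243.12  # Magnus coefficients for water over liquid
        gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
        return b * gamma / (a - gamma)

    # A surface only collects condensation if it is colder than the dew
    # point. At 25 C and 50% RH the dew point is roughly 13-14 C, so warm
    # server surfaces are safe; an over-chilled supply duct may not be.
    print(dew_point_c(25, 50))
    ```
    
    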
  • Re:Quick solution (Score:4, Informative)

    by billcopc (196330) <vrillco@yahoo.com> on Thursday October 22, 2009 @01:14PM (#29837201) Homepage

    You mean like Crays used to have ?

    The problems with water are numerous: leaks, evaporation, rust/corrosion, dead/weak pumps, fungus/algae, even just the weight of all that water can cause big problems and complicate room layouts.

    Air is easy. A fan is a simple device: it either spins, or it doesn't. A compressor is also rather simple. Having fewer failure modes in a system makes it easier to monitor and maintain.

    You also can't just "dispose of the hot water". It's not like you can leave the cold faucet open and piss the hot water out as waste. Water cooling systems are closed loops. You cool your own water via radiators, which are themselves either passively or actively cooled with fans and Peltiers. You could recirculate the hot water through the building and recycle the heat, but for most datacenters you'd still have a huge thermal surplus that needs to be dissipated. Heat doesn't just vanish because you have water; water only lets you move it faster.

  • Re:Quick solution (Score:4, Informative)

    by Chris Burke (6130) on Thursday October 22, 2009 @07:04PM (#29841267) Homepage

    Nowhere on earth is so hot that servers won't run, unless you've built a server room over an active volcano or something.

    Given a sufficiently powerful fan, then yes.

    All we actually need to do is remove the heat from the servers to the air, and then keep swapping the air with the outside.

    Which becomes more difficult the higher the ambient air temperature becomes. Heat transfer is proportional to the temperature delta, so the closer the air temperature is to the heat sink temperature, the more air you need to blow to remove the same amount of heat. Eventually, the electricity you spend blowing air over the heat sinks exceeds the savings from using less AC.

    This was half the point of the article -- you can save a lot of money by raising server room temperatures, but eventually (at a temperature well below outdoor ambient around here) you actually start to lose money due to all the extra fan activity.
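    The fan-versus-AC trade-off can be put in rough numbers: the airflow needed to carry a fixed heat load scales inversely with the available temperature rise, and fan power scales roughly with the cube of airflow (the fan affinity laws). A sketch; the heat load and temperatures are assumed example values:

    ```python
    def airflow_kg_per_s(heat_w, supply_c, sink_c, cp=1005.0):
        """Mass flow of air needed to carry heat_w watts, given the
        temperature rise available (heat sink minus supply air).
        cp is the specific heat of air in J/(kg*K)."""
        delta_t = sink_c - supply_c
        if delta_t <= 0:
            raise ValueError("supply air must be cooler than the heat sink")
        return heat_w / (cp * delta_t)

    # Fan affinity laws: power scales with the cube of flow. Halving the
    # usable delta-T doubles the required flow for the same heat load,
    # and so roughly octuples fan power.
    flow_cool = airflow_kg_per_s(10_000, supply_c=20, sink_c=60)  # 40 K rise
    flow_warm = airflow_kg_per_s(10_000, supply_c=40, sink_c=60)  # 20 K rise
    print((flow_warm / flow_cool) ** 3)  # ~8x fan power
    ```

    This cubic penalty is why the savings "evaporate in a flurry of server fan activity" past a certain setpoint.
    
    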

    Which happens automatically if you let heat out the top and air in the bottom.

    Yes, but much too slowly to be of use. Convection is also proportional to the temperature difference. By the time your server room is sufficiently hotter than the outside to create significant airflow, your servers are toast.

    As for the first, I've always wondered why they don't use chimney-like devices to generate wind naturally and send it though server racks, instead of fans.

    Go ahead and try it. A lot of cases already have ducting that funnels air directly from outside the case to the CPU. A few more pieces of cardboard, a hole and chimney in the top of your case, and you should be ready to remove the fan and see what convection can do for you. Sneak preview: unless you've specifically picked components that can run off passive cooling, you'll be in the market for a new one. Especially if you live in a hot place and turn off your AC for this experiment.
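    The convection point can be quantified with the stack effect: the draft pressure a chimney develops at data-center temperature differences is only a few pascals, orders of magnitude below what server fans deliver. A sketch; the chimney height, temperatures, and air density are assumed example values:

    ```python
    def stack_draft_pa(height_m, t_inside_c, t_outside_c):
        """Stack-effect draft pressure (Pa) for a chimney of the given
        height, using the approximation dP = rho * g * h * (dT / T_in)."""
        g = 9.81             # m/s^2
        rho_outside = 1.225  # kg/m^3, air at ~15 C (assumed)
        t_in = t_inside_c + 273.15
        t_out = t_outside_c + 273.15
        return rho_outside * g * height_m * (t_in - t_out) / t_in

    # Even a 10 m chimney over a 45 C room venting to 25 C outdoor air
    # yields a draft of only a few pascals.
    print(stack_draft_pa(10, 45, 25))
    ```
    
    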

    While it's conceivable to have an effective server room based entirely on low-power chips that require no active cooling, space is still a major concern in the server room. The desire for greater compute density directly fights against spreading out a large number of low-power chips. Thus performance/watt becomes a major metric for the server room, because operators want the most performance for a fixed amount of space and thus cooling.

    why not have AC vent into server racks?

    That's actually a good idea, and a lot of places do it.

