The Risks and Rewards of Warmer Data Centers

1sockchuck writes "The risks and rewards of raising the temperature in the data center were debated last week in several new studies based on real-world testing in Silicon Valley facilities. The verdict: companies can indeed save big money on power costs by running warmer. Cisco Systems expects to save $2 million a year by raising the temperature in its San Jose research labs. But nudge the thermostat too high, and the energy savings can evaporate in a flurry of server fan activity. The new studies added some practical guidance on a trend that has become a hot topic as companies focus on rising power bills in the data center."
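
As a rough illustration of the trade-off the summary describes, here is a back-of-the-envelope sketch. Every constant in it (chiller load, savings per degree, fan power, ramp behavior) is an assumption chosen for illustration, not a figure from the studies; the cube relationship between fan speed and fan power is the standard fan affinity rule of thumb.

```python
# Back-of-the-envelope model of the trade-off: raising the setpoint cuts
# chiller energy, but past some inlet temperature the server fans spin up,
# and fan power scales roughly with the cube of fan speed (fan affinity laws).
# Every constant below is an illustrative assumption, not measured data.

BASELINE_COOLING_KW = 500.0   # assumed chiller load at a 20 C setpoint
SAVINGS_PER_DEG_C = 0.04      # assumed ~4% chiller savings per degree raised
SERVER_FAN_KW = 120.0         # assumed total server fan power at baseline speed
FAN_RAMP_START_C = 25.0       # assumed inlet temp where server fans start ramping


def net_savings_kw(setpoint_c, baseline_c=20.0):
    """Estimated net facility savings (kW) at a given setpoint."""
    cooling_saved = BASELINE_COOLING_KW * SAVINGS_PER_DEG_C * (setpoint_c - baseline_c)

    # Assume fan speed rises 5% per degree above the ramp threshold;
    # fan power goes as speed cubed.
    ramp_deg = max(0.0, setpoint_c - FAN_RAMP_START_C)
    speed_factor = 1.0 + 0.05 * ramp_deg
    fan_penalty = SERVER_FAN_KW * (speed_factor ** 3 - 1.0)

    return cooling_saved - fan_penalty


if __name__ == "__main__":
    for t in range(20, 33):
        print(f"setpoint {t:2d} C -> net savings {net_savings_kw(t):7.1f} kW")
```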

Comments Filter:
  • Re:Quick solution (Score:3, Insightful)

    by autora ( 1085805 ) on Thursday October 22, 2009 @10:26AM (#29834907)
    I see you've really thought this one through... A warehouse full of servers that need regular maintenance filled with liquid nitrogen is sure to lower costs.
  • by mea37 ( 1201159 ) on Thursday October 22, 2009 @10:27AM (#29834913)

    "Sure, the fans kick in and you aren't saving as much, but are you still saving? I suspect you still are, there is a reason you are told to run ceiling fans in your house even with the AC on."

    If only someone would do a study based on real-world testing, we could be sure... Oh, wait...

    There are several differences between ceiling fans and server fans, so you can't use one to make predictions about the other. "Using one large fan to increase airflow in a room is a more efficient way for people to feel cooler than using AC to actually drop the temp a few extra degrees" does not imply that "running a bunch of little fans to individually increase heat sink efficiency in each of a number of computers would be more efficient than just keeping the room cool enough for those heat sinks to do their job in the first place".
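    To put rough numbers on that distinction (all of them invented for illustration, not taken from any study): the shaft power needed to move air is roughly flow times pressure rise divided by fan efficiency, and small server fans push air through dense heat sinks at high pressure drop and low efficiency, while one big ceiling fan moves a lot of air against almost no pressure.

```python
# Illustrative comparison only: assumed flows, pressures and efficiencies.
# Shaft power to move air: P = Q * dp / eta  (flow x pressure rise / efficiency).

def fan_power_w(flow_m3_s, pressure_pa, efficiency):
    """Approximate power (W) to move `flow_m3_s` of air against `pressure_pa`."""
    return flow_m3_s * pressure_pa / efficiency

# One large ceiling fan: lots of flow, almost no pressure rise, decent efficiency.
ceiling_fan_w = fan_power_w(flow_m3_s=3.0, pressure_pa=10.0, efficiency=0.5)

# 200 small server fans forcing air through dense heat sinks: little flow each,
# high pressure drop, low efficiency.
server_fans_w = 200 * fan_power_w(flow_m3_s=0.02, pressure_pa=200.0, efficiency=0.25)

print(f"one ceiling fan : ~{ceiling_fan_w:5.0f} W")
print(f"200 server fans : ~{server_fans_w:5.0f} W")
```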

  • Longer Study (Score:4, Insightful)

    by amclay ( 1356377 ) on Thursday October 22, 2009 @10:37AM (#29835039) Homepage Journal
    The studies were not long enough to constitute a very in-depth analysis. It would take a multi-month study, or up to a year, to capture all the effects of raising temperatures.

    For example, little consideration was given to:

    1) Mechanical Part wear (increased fan wear, component wear due to heat)

    2) Employee discomfort (80 degree server room?)

    3) Part failure*

    *If the existing cooling solution has issues, there would be less time between the initial problem and follow-on failures, since you've cut your thermal headroom by ~15 degrees.
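    As a rough illustration of point 3: a common rule of thumb (my assumption here, not a finding from the studies) is that electronics failure rates roughly double for every 10 C rise in operating temperature, on top of the reduced thermal headroom.

```python
# Rule-of-thumb sketch: failure rate ~doubles per 10 C; headroom shrinks as the
# setpoint rises. All constants are illustrative assumptions.

def relative_failure_rate(delta_t_c, doubling_interval_c=10.0):
    """Failure rate relative to baseline after raising temperature by delta_t_c."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

SHUTDOWN_TEMP_C = 40.0   # assumed inlet temperature where gear starts tripping
BASELINE_C = 20.0

for setpoint in (20, 25, 27, 30):
    rate = relative_failure_rate(setpoint - BASELINE_C)
    headroom = SHUTDOWN_TEMP_C - setpoint
    print(f"setpoint {setpoint} C: ~{rate:.2f}x baseline failure rate, "
          f"{headroom:.0f} C of headroom left")
```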
  • by greed ( 112493 ) on Thursday October 22, 2009 @10:48AM (#29835167)

    For starters, people sweat and computers do not. So, airflow helps cool people by increasing evaporation, in addition to direct thermal transfer. Even when you think you aren't sweating, your skin is still moist and evaporative cooling still works.

    Unless someone invents a CPU swamp cooler, that's just not happening on a computer. You do need airflow to keep the hot air from lingering around the hot component (either by natural convection or forced air), but you don't get that extra... let's call it "wind chill" effect that humans feel.
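    To sketch that in numbers (all of them illustrative guesses): with plain forced convection, heat removed is roughly Q = h * A * (T_part - T_ambient), so with no evaporation to help, the part temperature tracks the room temperature degree for degree unless the fans spin up and raise h.

```python
# Steady-state heat-sink temperature from simple forced convection.
# Power, area and heat-transfer coefficients below are illustrative guesses.

def component_temp_c(power_w, ambient_c, h_w_per_m2k, area_m2=0.02):
    """Heat-sink temperature for a part dissipating power_w into ambient air."""
    return ambient_c + power_w / (h_w_per_m2k * area_m2)

POWER_W = 95.0  # assumed CPU dissipation

# At constant airflow, every degree of ambient rise lands directly on the chip...
print(f"22 C room: {component_temp_c(POWER_W, 22, h_w_per_m2k=150):.1f} C")
print(f"27 C room: {component_temp_c(POWER_W, 27, h_w_per_m2k=150):.1f} C")

# ...unless the fans speed up and raise h, which is exactly the extra fan power
# the summary warns about.
print(f"27 C room, faster fans: {component_temp_c(POWER_W, 27, h_w_per_m2k=200):.1f} C")
```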

  • by BenEnglishAtHome ( 449670 ) on Thursday October 22, 2009 @11:32AM (#29835809)

    I'm less concerned with the fine-tuning of the environment for servers than I am with getting the basics right. How many bad server room implementations have you seen?

    I'm sitting in one. We used to have a half-dozen built-for-the-purpose Liebert units scattered around the periphery of the room. The space was properly designed and the hardware maintained whatever temp and humidity we chose to set. They were expensive to run and maintain but they did their job and did it right.

    About seven years ago, the bean-counting powers-that-be pronounced them "too expensive" and had them ripped out. The replacement central system pumps cold air under the raised floor from one central point. Theoretically, it could work. In practice, it was too humid in here the first day.

    And the first week, month, and year. We complained. We did simple things to demonstrate to upper management and building management that it was too humid in here, things like storing a box of envelopes in the middle of the room for a week and showing management that they had sealed themselves due to excessive humidity.

    We were, in every case, rebuffed.

    A few weeks ago, a contractor working on phone lines under the floor complained about the mold. *HE* got listened to. Preliminary studies show both Penicillium (relatively harmless) and black mold (not so harmless) in high concentrations. Lift a floor tile near the air input and there's a nice thick coat of fluffy, fuzzy mold on everything. There's mold behind the sheetrock that sometimes bleeds through when the walls sweat. They brought in dehumidifiers that are pulling more than 30 gallons of water out of the air every day. The incoming air, depending on who's doing the measuring, is at 75% to 90% humidity. According to the first independent tester who came in, "Essentially, it's raining" under our floor at the intake.

    And the areas where condensation is *supposed* to happen and drain away? Those areas are bone dry.

    IOW, our whole system was designed and installed without our input and over our objections by idiots who had no idea what they were doing.

    So, my fellow server room denizens, please keep this in mind - When people (especially management types) show up with studies that support the view that the way the environment is controlled in your server room can be altered to save money, be afraid. Be very afraid. It doesn't matter how good the basic research is or how artfully it could be employed to save money without causing problems, by the time the PHBs get ahold of it, it'll be perverted into an excuse to totally screw things up.

  • by speculatrix ( 678524 ) on Thursday October 22, 2009 @12:28PM (#29836545)
    UPS batteries are sealed lead-acid, and they definitely benefit from being kept cooler; it's also good to keep them in a separate room, usually close to your main power switching. As far as servers are concerned, I've always been happy with an ambient room temp of about 22 or 23C, provided airflow is good so you don't get hot spots, and it makes for a more pleasant working environment (although with remote management I generally don't need to actually work in them for long periods of time).
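    The usual vendor rule of thumb for sealed lead-acid batteries (my assumption, not something from the article) is that rated life assumes roughly 25 C ambient and halves for about every 10 C above that, which is why a cool, separate battery room pays for itself:

```python
# VRLA battery life derating sketch; the halving-per-10-C rule and the 5-year
# rating are assumptions, and exact curves vary by vendor.

def expected_battery_life_years(rated_years, ambient_c, rated_c=25.0):
    """Approximate service life at a given ambient temperature."""
    if ambient_c <= rated_c:
        return rated_years
    return rated_years / (2.0 ** ((ambient_c - rated_c) / 10.0))

for t in (22, 25, 30, 35):
    print(f"{t} C ambient -> ~{expected_battery_life_years(5.0, t):.1f} years "
          f"from a 5-year-rated battery")
```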
  • by wsanders ( 114993 ) on Thursday October 22, 2009 @12:49PM (#29836851) Homepage

    I am a little skeptical, since most hard drive failures I've had have come right after an air conditioning outage. The Google paper uses temperatures obtained from SMART, which are usually 10 to 15C higher than the ambient temperature in the room, and the tail of their sample falls off rapidly over 40C. What would the SMART temperature be if the ambient temperature were 40 or so? Probably 60 or above. Their graphs don't go that high.

    But we're only talking about raising the temperature of a data center 2 or 3 degrees. Meat lockers are not helpful. Moral of the story? Maybe spend your cooling bucks on your storage, then let the rest of your systems eat their exhaust. I have some new Juniper routers with no moving parts inside except fans; the yellow alarm doesn't kick off until 70C and the machine doesn't shut down until 85C.
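    If you do run warmer and want to keep an eye on the drives, here's a small monitoring sketch. It assumes smartmontools is installed and that the drives report the usual Temperature_Celsius attribute; the 45C threshold and the device names are placeholders, and smartctl's column layout varies by vendor, so treat the parsing as a starting point rather than gospel.

```python
# Flag drives whose SMART temperature exceeds an assumed threshold.
# Requires smartmontools; run with enough privileges to read SMART data.
import subprocess

# SMART temps tend to run 10-15 C above ambient, so a ~27 C room puts drives
# near 40 C; 45 C here is an arbitrary "start paying attention" threshold.
ALERT_C = 45

def smart_temp_c(device):
    """Return the drive's reported Temperature_Celsius raw value, or None."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            return int(fields[9])   # raw value column of `smartctl -A` output
    return None

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):   # placeholder device names
        temp = smart_temp_c(dev)
        if temp is None:
            print(f"{dev}: no temperature attribute reported")
        elif temp >= ALERT_C:
            print(f"{dev}: {temp} C - hotter than I'd like")
        else:
            print(f"{dev}: {temp} C")
```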

  • by bill_kress ( 99356 ) on Thursday October 22, 2009 @01:38PM (#29837577)

    You left out what is usually the best part!

    For his valiant efforts in preventing waste, did the bean counter get promoted to VP level or straight to an officer of the company? Or did he quit (or get pushed out) and land a higher-paying job elsewhere? This kind of stupidity never goes unrewarded.

  • by rbcd ( 1518507 ) on Thursday October 22, 2009 @05:39PM (#29840505)

    > We had about 40 hard drive failures and 12 power supply failures coming back up that evening.

    That could have just been due to an infrequent shutdown. Hard drives are known for failing to spin back up after running continuously for a very long time, for example.
