
The Risks and Rewards of Warmer Data Centers 170

Posted by samzenpus
from the not-so-chilly dept.
1sockchuck writes "The risks and rewards of raising the temperature in the data center were debated last week in several new studies based on real-world testing in Silicon Valley facilities. The verdict: companies can indeed save big money on power costs by running warmer. Cisco Systems expects to save $2 million a year by raising the temperature in its San Jose research labs. But nudge the thermostat too high, and the energy savings can evaporate in a flurry of server fan activity. The new studies added some practical guidance on a trend that has become a hot topic as companies focus on rising power bills in the data center."
This discussion has been archived. No new comments can be posted.

The Risks and Rewards of Warmer Data Centers

Comments Filter:
  • Possible strategy (Score:4, Interesting)

    by Nerdposeur (910128) on Thursday October 22, 2009 @09:38AM (#29834507) Journal

    1. Get a thermostat you can control with a computer
    2. Give the computer inputs of temperature and energy use, and output of heating/cooling
    3. Write a program to minimize energy use (genetic algorithm?)
    4. Profit!!

    Possible problem: do we need to factor in some increased wear & tear on the machines for higher temperatures? That would complicate things.
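The four-step strategy above can be sketched as a toy control loop. This is a minimal illustration, not a real building-control program: the power model, the numbers in it, and the search range are all invented for the sake of the example, and a plain grid search stands in for the suggested genetic algorithm.

```python
# Toy sketch of the strategy above: search for the cooling setpoint
# that minimizes total power. All model numbers are made up.

def total_power_w(setpoint_c):
    """Hypothetical model: chiller cost falls as the setpoint rises,
    but server fan power ramps up sharply past ~27 C."""
    cooling = max(0.0, 35.0 - setpoint_c) * 400.0          # W per degree of cooling
    fans = 200.0 + max(0.0, setpoint_c - 27.0) ** 2 * 300.0
    return cooling + fans

def best_setpoint(lo=18.0, hi=32.0, step=0.5):
    """Step 3 of the strategy: try candidate setpoints, keep the cheapest."""
    candidates = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min(candidates, key=total_power_w)

sp = best_setpoint()   # lands just above the fan ramp-up knee
```

Under this made-up model the optimum sits slightly above the point where fans start ramping, which matches the summary's warning that pushing the thermostat too high hands the savings back to the server fans.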

  • UNITS? (Score:2, Interesting)

    by RiotingPacifist (1228016) on Thursday October 22, 2009 @10:02AM (#29834701)

80 whats? Obviously they mean 80F (running a data center at 80K, 80C or 80R would be insane), but you should always specify units (especially if you're using some backwards unit like Fahrenheit!)
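The poster's point is easy to check with the standard conversions: only one of the four readings of "80" is survivable.

```python
# Convert 80 in each scale to Celsius to see which reading is sane.

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

def k_to_c(k):
    return k - 273.15

def r_to_c(r):                       # Rankine: absolute scale in Fahrenheit degrees
    return (r - 491.67) * 5.0 / 9.0

# 80F ~ 26.7C: a plausible warm-aisle intake temperature.
# 80K ~ -193C and 80R ~ -229C: cryogenic; 80C would cook the hardware.
```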

  • Ducted cabinets (Score:3, Interesting)

    by tom17 (659054) on Thursday October 22, 2009 @10:04AM (#29834707) Homepage
So what about having ductwork as another utility that is brought to each individual server? Rather than having thousands of tiny inefficient fans whirring away, you could have a redundant farm of large efficient fans that pull in cool air from outside (cooling only required then in hot climates or summer) and duct it under the floor in individual efficient ducts to each cabinet. Each cabinet would then have integral ductwork that would connect to individual servers. The servers would then have integral ductwork that would route the air to the critical components. There would have to be a similar system of return-air ductwork that would ultimately route back to another redundant farm of large efficient fans that scavenge the heated air and dump it outside.

I realise that this is not something that could be done quickly; it would require co-operation from all major vendors, and then only if it would actually end up being more efficient overall. There would be lots of hurdles to overcome too... efficient ducting (no jagged edges or corners like in domestic HVAC ductwork), no leaks, easy interconnects, space requirements, rerouting away from inactive equipment, etc. You would still need some A/C in the room as there is bound to be heat leakage from the ductwork, as well as heat given off from less critical components, but the level of cooling required would be much less if the bulk of the heat was ducted straight outside.

    So I know the implementation of something like this would be monumental, requiring redesigning of servers, racks, cabinets and general DC layout. It would probably require standards to be laid out so that any server will work in any cab etc (like current rackmount equipment is fairly universally compatible), but after this conversion, could it be more efficient and pay off in the long run?

    Just thinking out loud.

    Tom...

  • by EmagGeek (574360) <gterich@ a o l . c om> on Thursday October 22, 2009 @10:08AM (#29834749) Journal

Well, if you have a large cluster, you can load balance based on CPU temp to maintain a uniform junction temp across the cluster. Then all you need to do is maintain just enough A/C to keep the CPU cooling fans running slow (so there is excess cooling capacity to handle a load spike, since the A/C can only change the temp of the room so quickly).

    Or, you can just bury your data center in the antarctic ice and melt some polar ice cap directly.
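The temperature-aware dispatch in the first paragraph can be sketched in a few lines. The node names and temperatures are invented, and a real scheduler would also weigh queue depth and job size; this only shows the core rule of sending work to the coolest node.

```python
# Sketch of temperature-aware dispatch: route each new job to the node
# reporting the lowest CPU temperature, evening out junction temps.

def pick_node(temps_c):
    """temps_c: dict mapping node name -> current CPU temp in Celsius."""
    return min(temps_c, key=temps_c.get)

cluster = {"node-a": 61.0, "node-b": 54.5, "node-c": 58.2}
target = pick_node(cluster)   # "node-b", currently the coolest
```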

  • Re:Possible strategy (Score:4, Interesting)

    by Linker3000 (626634) on Thursday October 22, 2009 @10:20AM (#29834843) Journal

    Interestingly enough, I recently submitted an 'Ask Slashdot' (Pending) about this as my IT room is also the building's server room (just one rack and 5 servers) and we normally just keep the windows open during the day and turn on the aircon when we close up for the night, but sometimes we forget and the room's a bit warm when we come in the next day! We could just leave the aircon on all the time but that's not very eco-friendly.

    I was asking for advice on USB/LAN-based temp sensors and also USB/LAN-based learning IR transmitters so we could have some code that sensed temperature and then signalled to the aircon to turn on by mimicking the remote control. Google turns up a wide range of kit from bareboard projects to 'professional' HVAC temperature modules costing stupid money so I was wondering if anyone had some practical experience of marrying the two requirements (temp sensor and IR transmitter) with sensibly-priced, off-the-shelf (in the UK) kit.

    Anyone?
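Whatever sensor and IR kit ends up being used, the glue logic the poster describes is small. This is a hedged sketch: `decide()` stands in for the loop body, and the actual sensor read and IR transmit calls depend entirely on the hardware chosen, so only the decision logic is shown. The thresholds are arbitrary examples.

```python
# Hysteresis controller for the aircon: turn it on when the room gets
# warm, off when it cools down, with a gap so it doesn't flap around
# a single threshold. Temperatures are illustrative.

ON_ABOVE_C = 26.0
OFF_BELOW_C = 22.0

def decide(temp_c, aircon_on):
    """Return the action to take given current temp and aircon state."""
    if not aircon_on and temp_c >= ON_ABOVE_C:
        return "send_ir_on"      # mimic the remote's power-on code
    if aircon_on and temp_c <= OFF_BELOW_C:
        return "send_ir_off"
    return "no_action"
```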

  • by Yvan256 (722131) on Thursday October 22, 2009 @10:24AM (#29834887) Homepage Journal

If you save energy by having warmer data centers, but it shortens the MTBF, is it really that big of a deal?

    Let's say the hardware is rated for five years. Let's say that running it hotter than the recommended specifications shortens that to three years.

But in three years, new and more efficient hardware will probably replace it anyway, requiring, let's say, 150 watts instead of 200 watts. The old hardware would have been replaced regardless, because the new hardware costs less to run over those two lost years.
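The argument above is easy to put in numbers. These are assumed figures for illustration only: the poster's 200 W and 150 W boxes, 24x7 operation, and a made-up $0.10/kWh electricity price.

```python
# Back-of-envelope cost of the "lost" years 4-5: keep the old 200 W box,
# or run a 150 W replacement over the same two years instead.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10        # assumed electricity price, USD

def energy_cost(watts, years):
    return watts / 1000.0 * HOURS_PER_YEAR * years * PRICE_PER_KWH

old_tail = energy_cost(200, 2)    # $350.40 on the old hardware
new_tail = energy_cost(150, 2)    # $262.80 on the replacement
saving = old_tail - new_tail      # $87.60 per box over those two years
```

Per box the difference is modest, which suggests the early-replacement argument only really works at fleet scale or when the efficiency gap is larger than 50 W.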

  • by amorsen (7485) <benny+slashdot@amorsen.dk> on Thursday October 22, 2009 @10:53AM (#29835213)

    But in three years, new and more efficient hardware will probably replace it anyway because it will require, let's say, 150 watts instead of 200 watts

    That tends to be hard to get actually, at least if we're talking rack-mountable and if you want it from major vendors.

    Rather you get something 4 times as powerful which still uses 200W. If you can then virtualize 4 of the old servers onto one of the new, you have won big. If you can't, you haven't won anything.

  • Re:UNITS? (Score:4, Interesting)

    by nedlohs (1335013) on Thursday October 22, 2009 @11:08AM (#29835437)

    And yet the temperature here measured in F gets negative every winter. And where I previously lived it got above 100F every summer (and it also does where I am now, but only a day or three each year).

    But in both those places a temperature of 0C was the freezing point of water, and 100C the boiling point. Yes that 100C one isn't so useful in terms of daily temperature, the 0C is though since whether water will freeze or not is the main transition point in daily temperature.

  • Re:Move to Canada (Score:3, Interesting)

    by asdf7890 (1518587) on Thursday October 22, 2009 @11:19AM (#29835613)

    I know it was meant as a joke, but moving to colder climates may not be such a bad idea. Moving to a northern country such as Canada or Norway, you would benefit from the colder outside temperature, in the winter, to keep the servers cool and then any heat produced could be funnelled to keeping nearby buildings warm.

There has been a fair bit of talk about building so-called "green" DCs in Iceland, where the lower overall temperatures reduce the need for cooling (meaning less energy used, lowering operational costs) and there is good potential for powering the things mainly with power obtained from geothermal sources.

There was also a study (I think it came out of Google) suggesting that load balancing over an international network, like Google's App Engine or similar, be arranged so that when there is enough slack to make a difference, more load is passed to the DCs that are experiencing more wintery conditions than the others. It makes sense for applications where the extra latency of the server perhaps being on the other side of the world some of the time isn't going to make much difference to the users.
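The follow-the-winter scheme described above fits in a few lines. This is a sketch under assumed inputs: the site names and temperatures are invented, and a real balancer would use cooling cost (PUE) rather than raw outside temperature; the slack check captures the "only when there is enough slack" condition.

```python
# When capacity is slack, shift deferrable load toward the data center
# with the coldest outside air; otherwise keep it local for latency.

def coldest_site(outside_temps_c, slack_fraction, threshold=0.2):
    """Return the site to favor, or None when the fleet is too busy."""
    if slack_fraction < threshold:
        return None            # latency wins; don't reroute under load
    return min(outside_temps_c, key=outside_temps_c.get)

sites = {"dublin": 9.0, "singapore": 30.0, "iowa": -2.0}
```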

  • Re:Quick solution (Score:3, Interesting)

    by lobiusmoop (305328) on Thursday October 22, 2009 @11:29AM (#29835767) Homepage

Data centers would be much more efficient if blade servers had modular water cooling instead of fans. Water is much better at transferring heat than air. Then you could just remove all the fans from the data center and add a network of water pipes (alongside the spaghetti of network and power cabling) around the data center. Then just pump cold water in and dispose of the hot water (pretty cheap to do). It should be reasonably safe too: the water would only be near low-voltage systems (voltage step-down should be happening at a single point in an efficient data center, not at every rack).

  • Risk of AC failure (Score:5, Interesting)

    by Skapare (16644) on Thursday October 22, 2009 @12:09PM (#29836315) Homepage

If there is a failure of AC ... that is, either Air Conditioning OR Alternating Current ... you can see a rapid rise in temperature. Even with all the systems powered off, the heat stored inside the equipment, which is much hotter than the room air, emerges and raises the room temperature rapidly. And if the equipment is still powered (via UPS when the power fails), the rise is much faster.

    In a large data center I once worked at, with 8 mainframes and 1800 servers, power to the entire building failed after several ups and downs in the first minute. The power company was able to tell us within 20 minutes that it looked like a "several hours" outage. We didn't have the UPS capacity for that long, so we started a massive shutdown. Fortunately it was all automated and the last servers finished their current jobs and powered off in another 20 minutes. In that 40 minutes, the server room, normally kept around 17C, was up to a whopping 33C. And even with everything powered off, it peaked at 38C after another 20 minutes. If it weren't so dark in there I think some people would have been starting a sauna.

    We had about 40 hard drive failures and 12 power supply failures coming back up that evening. And one of the mainframes had some issues.
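The numbers in that story imply a striking rate of rise. This is just arithmetic on the figures the poster gives, not a thermal model:

```python
# Average temperature rise rates from the outage described above:
# 17C -> 33C over the 40-minute automated shutdown, then 33C -> 38C
# in the 20 minutes after everything was off.

rate_during_shutdown = (33 - 17) / 40.0   # 0.4 C per minute with gear winding down
rate_after_poweroff = (38 - 33) / 20.0    # 0.25 C per minute from stored heat alone
```

Even fully powered off, the stored heat kept the room climbing at more than half the powered rate, which is the poster's point: losing cooling is dangerous well beyond the moment you hit the power switches.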

  • Re:Quick solution (Score:4, Interesting)

    by DavidTC (10147) <{slas45dxsvadiv. ... } {neverbox.com}> on Thursday October 22, 2009 @02:03PM (#29837991) Homepage

    I think we're spending way too much time trying to 'cool' things that do not, in fact, need to be cooler than outside. Nowhere on earth is so hot that servers won't run, unless you've built a server room over an active volcano or something.

    All we actually need to do is remove the heat from the servers to the air, and then keep swapping the air with the outside.

Which happens automatically if you let heat out the top and air in the bottom. Even if you have to condition the incoming air to remove moisture, that's cheaper than actually chilling it with A/C. So the second part, replacing the room air, is easy.

As for the first, I've always wondered why they don't use chimney-like devices to generate wind naturally and send it through server racks, instead of fans. I think all the heat in a server room could actually, on exit, suck incoming air in fast enough to cool computers if it actually hit the right places on the way in.

Heck, this would apply anyway. Instead of having the AC vent into server rooms, why not have it vent into server racks? Hook the AC up to the fan vent on each server and blow cold air straight in. The room itself could end up not cold at all.
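The chimney idea can be sanity-checked with the standard stack-effect formula, dP = C * a * h * (1/T_out - 1/T_in) with C ~ 0.0342 in SI units and temperatures in kelvin. The stack height and temperatures below are assumed for illustration.

```python
# Rough natural-draft pressure for a hypothetical 10 m exhaust stack,
# 20 C outside air, 40 C server exhaust.

def stack_pressure_pa(height_m, t_out_c, t_in_c, atm_pa=101325.0):
    C = 0.0342                                  # SI constant for the formula
    t_out_k = t_out_c + 273.15
    t_in_k = t_in_c + 273.15
    return C * atm_pa * height_m * (1.0 / t_out_k - 1.0 / t_in_k)

dp = stack_pressure_pa(10.0, 20.0, 40.0)        # on the order of 7-8 Pa
```

A few pascals of draft is real but small next to the static pressure a server's own fans produce, which suggests pure chimney cooling would need very tall stacks or very open airflow paths to work on its own.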
