Green Grid Argues That Data Centers Can Lose the Chillers

Nerval's Lobster writes "The Green Grid, a nonprofit organization dedicated to making IT infrastructures and data centers more energy-efficient, is making the case that data center operators are running their facilities too conservatively. Rather than rely on mechanical chillers, it argues in a new white paper (PDF), data centers can reduce power consumption by running at higher inlet temperatures of 20 degrees C and above. Green Grid originally recommended that data center operators build to the ASHRAE A2 specifications: 10 to 35 degrees C (dry-bulb temperature) and 20 to 80 percent humidity. But the paper also presented data showing that a range of 20 to 35 degrees C is acceptable. Data centers have traditionally included chillers, mechanical cooling devices designed to lower the inlet temperature. Cooling the air, according to what the paper characterizes as anecdotal evidence, lowered the number of server failures that a data center experienced each year. But chilling the air also adds cost, and PUE numbers go up as a result."
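For readers unfamiliar with the metric: PUE (power usage effectiveness) is total facility power divided by the power delivered to the IT gear, so every watt the chillers burn pushes it above the ideal of 1.0. A minimal sketch of the effect, using invented load figures rather than anything from the white paper:

# Power Usage Effectiveness = total facility power / IT power.
# All wattage figures below are invented for illustration.
def pue(it_kw, cooling_kw, other_kw):
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load = 1000.0          # kW drawn by the IT equipment itself
chiller_cooling = 450.0   # kW for mechanical chillers (assumed)
free_air_cooling = 120.0  # kW for fans/economizers only (assumed)
overhead = 80.0           # kW for lighting, UPS losses, etc. (assumed)

print(f"With chillers: PUE = {pue(it_load, chiller_cooling, overhead):.2f}")   # ~1.53
print(f"Free-air only: PUE = {pue(it_load, free_air_cooling, overhead):.2f}")  # ~1.20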
Comments:
  • If the owners of the building could run with less cooling, I would think they would. Removing heat is expensive and building owners are cheap; if it were possible to spend less, I would think owners would.
    • If the owners of the building could run with less cooling, I would think they would.

      Have a look here [datacenterknowledge.com] for more background.

      Basically, they're describing four types of data centers. Have you seen the Google data centers with their heat curtains and all that? I surely don't work in any of those types of data centers. Some of the fancier ones around here have hot/cold aisles, but the majority are just machines in racks, sometimes with sides, stuck in a room with A/C. Fortunately it's more split systems than window units.

      • A data room can get hot in a hurry without A/C, and if you're running at 65, you get to 95 much more slowly than you do when you're running at 82.

        That really depends on the size of your datacenter and your server load. If you've got a huge room with one rack in the middle, you're good to go. If you've got a 10x10 room with 2 or 3 loaded racks and your chiller goes tits up, you're going to be roasting hardware in a few short minutes. Some quick back-of-the-napkin calculations show that a 10x10x8 room with a single rack pulling all the juice it can from a 20 amp circuit will raise the temperature in the room about 10 degrees every 2 minutes. From 8
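        A rough sanity check of that figure, treating the room as nothing but air (ignoring the thermal mass of the racks, walls and floor, which in practice slows things down) and assuming the rack draws the 80% continuous limit of a 20 A / 120 V circuit:

        # Back-of-the-napkin heating rate for a sealed 10x10x8 ft room with one loaded rack.
        # Assumes all of the rack's heat goes straight into the air.
        room_volume_m3 = 10 * 10 * 8 * 0.0283168   # 800 ft^3 is about 22.7 m^3
        air_mass_kg = room_volume_m3 * 1.2         # air density ~1.2 kg/m^3 -> ~27 kg of air
        cp_air = 1005.0                            # specific heat of air, J/(kg*K)
        power_w = 20 * 120 * 0.8                   # 1920 W of heat from the rack (assumed)

        rate_c_per_min = power_w / (air_mass_kg * cp_air) * 60
        print(f"Air-only heating rate: {rate_c_per_min:.1f} C/min ({rate_c_per_min * 1.8:.1f} F/min)")
        # Prints roughly 4.2 C/min (about 7.6 F/min). A real room climbs more slowly once
        # the equipment and structure soak up heat, which puts it in the same ballpark as
        # the parent's "10 degrees every 2 minutes".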

  • We looked for where the fiber map ran over a mountain range and was near a hydroelectric plant. Our data center is cooled without chillers, simply by outside airflow 6 months of the year, and with only a few hours' use of chillers per day for another 3 months. I know this won't help people running a DC in Guam, but for those who have a choice, location makes a world of difference.

  • November 2012 Wired covers "hot" machine rooms in its paean to Google's data centers. Usually by the time they've picked up a story, it's done.
  • I am bad at physics, so I might say something stupid. But does it actually make a difference? I feel like the temperature of the hot components is WAY over 20 C. So whatever energy they output is what you need to compensate for. In the steady state you need to remove as much heat as they produce. Isn't that constant whatever temperature the datacenter is run at?

    • by Jiro ( 131519 )

      Imagine that you used no cooling at all. The components wouldn't get infinitely hot; they'd get very hot, but the hotter they get the more readily the heat would escape, until they reach some steady state where they're hot enough that the heat escapes fast enough that it doesn't get any hotter.

      So technically you're correct--a steady state always means that exactly the same amount of energy is being added and removed at the same time--but using cooling will allow this steady state to exist at lower temperatures.
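      A minimal sketch of that heat balance, with invented conductance figures standing in for how readily heat escapes the room:

      # Steady-state room temperature from a simple heat balance: at equilibrium the
      # heat generated equals the heat removed, P = k * (T_room - T_outside).
      # The conductance values below are invented purely to illustrate the point.
      def equilibrium_temp_c(power_w, conductance_w_per_c, outside_c):
          return outside_c + power_w / conductance_w_per_c

      it_heat = 20000.0        # W of server heat (assumed)
      leaky_walls = 300.0      # W/C escaping passively with no cooling (assumed)
      active_cooling = 4000.0  # W/C effective removal with forced cooling (assumed)

      print(f"No cooling:     {equilibrium_temp_c(it_heat, leaky_walls, 25):.0f} C")     # ~92 C
      print(f"Active cooling: {equilibrium_temp_c(it_heat, active_cooling, 25):.0f} C")  # 30 C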

    • Our server room is typically kept at 74 to 76 degrees. We've had a few close calls over the summer where the ambient temp got above 84 and some of the machines just up and froze or shut down (mostly the older gear... newer stuff does seem to handle heat better). As the room temp rises, the internal temperatures rise too - some processors were reporting temps near the boiling point.

    • by Cramer ( 69040 )

      Yes and no. If the room is properly insulated, any heat generated in the room will have to be forcefully removed. At some point, the room will reach equilibrium -- heat will escape at the rate it's generated, but it will be EXTREMELY hot in there by then. The rate of thermal transfer is dependent on the difference in temperature; the larger the difference, the faster energy transfers. Raising the temp of the room will lead to higher equipment temps; until you do it, you won't know if you've made the difference.

  • by Chris Mattern ( 191822 ) on Friday October 26, 2012 @05:40PM (#41784123)

    I've been an operator and sysadmin for many years now, and I've seen this experiment done involuntarily a lot of times, in several different data centers. Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.

    • by geekoid ( 135745 )

      "the temperature goes well beyond that in a big hurry when the chillers cut out."
      AND?
      alternative:
      SO?

      If it's below 35 C outside, why wouldn't you just pump that air in (through filters)?

      Your situation is most likely in rooms that are sealed to keep cool air in, so they trap the heat in. If the systems could run at 35 C, you would have windows. Worst case, open some windows and put a fan in.
      Computers can run a lot hotter than they could 3 decades ago.
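      What's being described is essentially an air-side economizer. A toy version of the decision logic, with thresholds that are assumptions rather than anything from TFA, might look like:

      # Toy air-side economizer decision logic; the thresholds are illustrative assumptions.
      ASHRAE_A2_MAX_C = 35.0   # upper end of the A2 dry-bulb range cited in the summary
      TARGET_INLET_C = 27.0    # desired supply-air temperature (assumed)

      def cooling_mode(outside_temp_c, outside_humidity_pct):
          """Pick a cooling strategy for the current outdoor conditions."""
          if outside_temp_c <= TARGET_INLET_C and 20 <= outside_humidity_pct <= 80:
              return "free cooling: push filtered outside air through the cold aisle"
          if outside_temp_c <= ASHRAE_A2_MAX_C:
              return "partial free cooling: mix outside air, trim with evaporative or DX"
          return "mechanical chillers: outside air is out of spec"

      print(cooling_mode(18, 55))  # cool, dry day -> no chillers needed
      print(cooling_mode(38, 30))  # hot day -> fall back to chillers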

      • by Cramer ( 69040 )

        A) Temperature STABILITY!
        B) Humidity.

        The room is sealed and managed by precision cooling equipment because we want a precisely controlled, stable environment. As long as the setpoint is within the human comfort range, the exact value is less important than holding it steady! Google's data has shown, *for them*, 80F is the optimal point for hardware longevity. (I've not seen anywhere that it's made a dent in their cooling bill.)

      • by Burdell ( 228580 )

        I know some people who have tried to work out filtration systems that can handle the volume of air needed for a moderate-size data center (so that outside air could be circulated rather than cooling and recirculating the inside air), and it quickly became as big an expense as just running the A/C. Most data centers are in cities (because that's where the communications infrastructure, operators, and customers are), and city air is dirty.

    • by amorsen ( 7485 )

      Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.

      Only because the chillers going out kills the ventilation at the same time. THAT is unhealthy. Cooling a datacenter through radiation is adventurous.

  • by Miamicanes ( 730264 ) on Friday October 26, 2012 @05:41PM (#41784145)

    Heat is death to computer hardware. Maybe not instantly, but it definitely causes premature failure. Just look at electrolytic capacitors, to name one painfully obvious component that fails with horrifying regularity in modern hardware. Fifteen years ago, capacitors were made with bogus electrolyte and failed prematurely. Some apparently still do, but the bigger problem NOW is that lots of items are built with nominally-good electrolytic capacitors that fail within a few months, precisely when their official datasheet says they will. A given electrolytic capacitor might have a design half-life of 3-5 years at temperatures of X degrees, but be expected to have 50/50 odds of failing at any time after 6-9 months when used at temperatures at or exceeding X+20 degrees. Guess what temperature modern hardware (especially cheap hardware with every possible component cost reduced by value engineering) operates at? X+Y, where Y >= 20.
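    The rule of thumb behind that claim is the usual "10-degree rule" for electrolytics: rated life roughly halves for every extra 10 C. A quick sketch using typical datasheet-style numbers (not figures from this post):

    # Electrolytic capacitor life estimate using the common 10-degree doubling rule:
    #   life = rated_life * 2 ** ((rated_temp - actual_temp) / 10)
    # The rated values below are typical datasheet-style numbers, used only for illustration.
    def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
        return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

    RATED_HOURS, RATED_TEMP_C = 5000, 105   # e.g. a "105 C / 5000 h" part

    for hot_spot_c in (55, 65, 75, 85):
        years = cap_life_hours(RATED_HOURS, RATED_TEMP_C, hot_spot_c) / 8760
        print(f"Hot spot at {hot_spot_c} C -> roughly {years:.1f} years")
    # Running the same part 20 C hotter (85 C vs 65 C) cuts the estimate from ~9 years to ~2.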

    Heat also does nasty things to semiconductors. A modern integrated circuit often has transistors whose junctions are literally just a few atoms wide (18 is the number I've seen tossed around a lot). In durability terms, ICs from the 1980s were metaphorically constructed from the paper used to make brown paper shopping bags, and 21st-century semiconductors are made from a single layer of 2-ply toilet paper that's also wet, has holes punched into it, and is held under tension. Heat stresses these already-stressed semiconductors out even more, and like electrolytic capacitors, it causes them to begin failing in months rather than years.

    • by geekoid ( 135745 )

      Define 'a lot'? Because I don't see it a lot; I've never read a report that would use the term 'a lot'.

      And you are using 'heat' in the most stupid way. Temperatures over a certain level will cause electronics to wear out faster, or even break -- not 'heat is bad'.
      Stupid post.

      • The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

        • by ShanghaiBill ( 739463 ) on Friday October 26, 2012 @08:13PM (#41785565)

          The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

          Yet when Google analyzed data from 100,000 servers, they found failures were negatively correlated with temperature. As long as they kept the temp in spec, they had fewer hard errors at the high end of the operating temperature range. That is why they run "hot" data centers today.

          I'll take Google's hard data over your gut feeling.

          • by tlhIngan ( 30335 )

            The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

            Yet when Google analyzed data from 100,000 servers, they found failures were negatively correlated with temperature. As long as they kept the temp in spec, they had fewer hard errors at the high end of the operating temperature range.

    • You completely missed the point and obviously didn't RTFA. The empirical evidence shows that datacentres can be run warmer than they typically are now with an acceptable increase in hardware failure -- i.e., bugger all. Increasing the temp in a massive datacentre by 5 degrees C will save a bundle of money/carbon emissions that far more than offsets the cost of replacing an extra component or two a month.

      As impressive as your assertions are, they are just that - assertions. Reality disagrees with you.
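      One rough way to frame that trade-off -- every figure below is invented for illustration, not taken from TFA or Google's numbers:

      # Back-of-the-envelope trade-off: cooling energy saved vs. extra hardware replaced.
      # All figures are assumptions; real numbers vary enormously by site.
      it_load_kw = 1000.0
      hours_per_year = 8760
      cost_per_kwh = 0.10

      pue_before, pue_after = 1.6, 1.3   # assumed effect of raising the setpoint
      saved_kwh = it_load_kw * hours_per_year * (pue_before - pue_after)

      extra_failures_per_month = 2       # "an extra component or two a month"
      cost_per_replacement = 500.0       # parts plus labour (assumed)

      print(f"Cooling energy saved: ${saved_kwh * cost_per_kwh:,.0f}/year")  # ~$262,800
      print(f"Extra hardware cost:  ${extra_failures_per_month * 12 * cost_per_replacement:,.0f}/year")  # $12,000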

  • by Severus Snape ( 2376318 ) on Friday October 26, 2012 @05:42PM (#41784157)
    Yes, it's generally in the nature of these companies to spend unneeded money. They hire people whose exact job is to make data centers as efficient as possible -- even to the extent that Facebook and others are open-sourcing their information to try to get others involved in improving data center design. I say "generally" as I'm sure most have seen the story on here recently about Microsoft wasting energy to meet a contract target; that, however, is a totally different kettle of fish.
    • Explain to me, again, why Facebook isn't dumping tons of money into a one-time investment into making Linux power management not suck? Or other companies, for that matter? Right, because it's an "accepted fact" that data centers must run at very high capacity all the time and power management efforts would hinder availability. And I presume this is *after* they dumped the money into Linux power management and saw it work out to be a colossal failure? Well, that's possible--they might have never bothered

    • by geekoid ( 135745 )

      I like that you assume corporations run everything perfectly and never make a mistake, or never continue to do something based on an assumption.

      It would be adorable if it wasn't so damn stupid.

  • Cui bono? (Score:4, Insightful)

    by J'raxis ( 248192 ) on Friday October 26, 2012 @06:45PM (#41784827) Homepage

    The board of directors [wikipedia.org] of the "Green Grid" is composed almost entirely of companies that would benefit if data centers had to buy more computing hardware more frequently, rather than continue paying for cooling equipment.

  • You can go a couple degrees warmer than in the "old days" (ten years ago). Things like bearings in fans and drives will fail. Capacitors will fail. Data centers produce LOTS of heat. I don't believe that the coin counters figured in the staff to replace the failed parts or the extra staff and time needed when manual procedures are used due to a downed system.
  • Computers crash/fail when overheating, and in a datacenter that can happen very fast. You absolutely must keep the temperatures from getting too hot. Some datacenters can get away with minimal cooling. Some datacenters need chillers and tons of money invested in keeping things at a low enough temperature that computers won't randomly lock up on you from the heat. There must be some datacenters that have too much cooling, but to say that datacenters in general don't need them demonstrates a lack of understanding.
