Green Grid Argues That Data Centers Can Lose the Chillers
Nerval's Lobster writes "The Green Grid, a nonprofit organization dedicated to making IT infrastructures and data centers more energy-efficient, is making the case that data center operators run their facilities too conservatively. Rather than relying on mechanical chillers, it argues in a new white paper (PDF), data centers can reduce power consumption by running at inlet temperatures of 20 degrees C or higher. The Green Grid originally recommended that data center operators build to the ASHRAE A2 specifications: 10 to 35 degrees C (dry-bulb temperature) and 20 to 80 percent humidity. But the paper also presented data showing that a range of 20 to 35 degrees C is acceptable. Data centers have traditionally included chillers, mechanical cooling devices designed to lower the inlet temperature. According to what the paper characterized as anecdotal evidence, cooling the air lowered the number of server failures a data center experienced each year. But chilling the air also added cost, and PUE numbers went up as a result."
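For context, PUE (Power Usage Effectiveness) is total facility power divided by power delivered to IT equipment, so lower is better and 1.0 is the ideal. A minimal sketch of why dropping the chillers improves it; all the wattages below are invented for illustration, not figures from the white paper:

# Power Usage Effectiveness: total facility power / IT equipment power.
# All loads below are illustrative assumptions, not from the paper.
it_load_kw = 1000.0        # servers, storage, network
chiller_kw = 400.0         # mechanical chiller plant
other_overhead_kw = 100.0  # lighting, UPS losses, fans

pue_with_chillers = (it_load_kw + chiller_kw + other_overhead_kw) / it_load_kw
pue_without_chillers = (it_load_kw + other_overhead_kw) / it_load_kw

print(f"PUE with chillers:    {pue_with_chillers:.2f}")    # 1.50
print(f"PUE without chillers: {pue_without_chillers:.2f}")  # 1.10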
Re: (Score:2)
Can the baby seals insulate the data center?
Re:Translate this to (Score:5, Interesting)
Tree huggers telling an IT manager it's OK for his servers to burn up to save a baby seal.
Well, Google has already started running their data center much warmer than many data centers of the past, apparently with no ill effect.
It has nothing to do with hugging trees; it's simply hard-nosed economics. If 5 degrees induces 3 more motherboard failures in X number of months and you already have fail-over handled, it only takes a few seconds on a handheld calculator to figure out that trees have nothing to do with it.
The rules were written, as the article explains, based on little if any real-world data, for equipment that no longer exists, built with technology long since obsolete. The margin was probably never justified, and even if it was back in the '70s and '80s, it isn't anymore.
Google and Amazon and others have carefully measured real-world data taken from bazillions of machines in hundreds of data centers. They know how to do the math.
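A sketch of that calculator exercise, with invented numbers (the failure count, board cost, and cooling savings are all assumptions, not data from the article):

# Back-of-the-envelope: extra failures vs. cooling savings (all figures invented).
extra_failures_per_year = 3
cost_per_failure = 600.0            # replacement board + tech time, assumed
cooling_savings_per_year = 20000.0  # assumed saving from raising the setpoint

net = cooling_savings_per_year - extra_failures_per_year * cost_per_failure
print(f"Net annual benefit of running warmer: ${net:,.0f}")  # $18,200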
Re:Translate this to (Score:5, Interesting)
Well, Google has already started running their data center much warmer than many data centers of the past, apparently with no ill effect.
This is an understatement. Google increased the temp in their data centers after discovering that servers in areas with higher temps had fewer hard errors. So they went with higher temps across the board, saved tons of money on lower utility bills, and have fewer hard errors.
Back in the 1950s, early computers used vacuum tubes, which failed often and were difficult to replace. So data centers were kept very cool. Since then, data centers have continued to be aggressively cooled out of tradition and superstition, with little or no hard data to show that it is necessary or even helpful.
Re: (Score:1)
I have been running several servers with passive heatsinks for 5+ years at ambient temps that get as high as 85 F during the day, while the AC is off and I'm at work. IMO, money is better spent on lower-TDP components. For server CPUs you generally have a choice between a lower-TDP but more expensive CPU and a higher-TDP but cheaper CPU of similar performance. The lower-TDP CPU uses less electricity and generates less heat, which means less cooling, so in theory you recover the extra cost over time.
Additionally, the…
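A rough sketch of the parent's payback argument; the price premium, wattage delta, electricity rate, and cooling multiplier below are all assumptions chosen for illustration:

# Payback period for a lower-TDP CPU (every number here is an assumption).
price_premium = 120.0     # extra cost of the low-TDP part, $
watts_saved = 30.0        # TDP delta, W
rate_per_kwh = 0.12       # electricity price, $/kWh
cooling_multiplier = 1.5  # each watt saved also saves ~0.5 W of cooling

hourly_saving = (watts_saved / 1000.0) * rate_per_kwh * cooling_multiplier
payback_years = price_premium / (hourly_saving * 24 * 365)
print(f"Payback: {payback_years:.1f} years")  # ~2.5 years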
Re: (Score:2)
My own experience with mostly passively cooled modern PCs is that while temperatures within remain low enough that everything continues to work fine on a hot day (if I switch…
Re: (Score:2)
Well... yes and no. Can you build servers that can take the heat? Sure. But that's not what most datacenters have. Processors, and maybe (and it's a big maybe) memory, can take the heat... but in general, those 15K RPM disk drives are not going to like the extra heat. They have enough problems dissipating heat as it is.
So... possible, sure. But it does require some extra work. For off-the-shelf HP, Dell, or IBM gear, I wouldn't recommend it.
You do lower the lifespan of the equipment by placing it under…
Re: (Score:2)
Re: (Score:2)
And it would probably heat up any other rooms/offices it's next to.
If only there were some unused chillers that could be used to cool the air next to the server room.
Re: (Score:2, Informative)
They aren't going to die of heatstroke in 95 degrees. Drama queen much?
Re: (Score:1)
Agreed. I'm not spending two hours moving, upgrading, whatever in a 35C/95F room.
Are building owners really overheating? (Score:2)
Re: (Score:3)
If the owners of the building could run cooler I would think they would.
Have a look here [datacenterknowledge.com] for more background.
Basically, they're describing four types of data centers. Have you seen the Google data centers with their heat curtains and all that? I certainly don't work in any of those types of data centers. Some of the fancier ones around here have hot/cold aisles, but the majority are just machines in racks, sometimes with sides, stuck in a room with A/C. Fortunately it's more split systems than window units.
Re: (Score:3)
A data room can get hot in a hurry without A/C, and if you're running at 65, you get to 95 much more slowly than you do when you're running at 82.
That really depends on the size of your datacenter and your server load. If you've got a huge room with one rack in the middle, you're good to go. If you've got a 10x10 room with 2 or 3 loaded racks and your chiller goes tits up, you're going to be roasting hardware in a few short minutes. Some quick back-of-the-napkin calculations show that a 10x10x8 room with a single rack pulling all the juice it can from a 20 amp circuit will raise the temperature in the room about 10 degrees every 2 minutes…
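For what it's worth, the napkin math checks out if you assume a 120 V circuit and a sealed room with no heat loss (both assumptions on my part):

# Adiabatic temperature rise in a sealed 10x10x8 ft room (assumes a 120 V
# circuit and that no heat escapes; both are simplifying assumptions).
power_w = 20 * 120                   # 20 A at 120 V = 2400 W
volume_m3 = 10 * 10 * 8 * 0.0283168  # 800 ft^3 -> ~22.7 m^3
air_mass_kg = volume_m3 * 1.2        # air density ~1.2 kg/m^3
cp = 1005.0                          # specific heat of air, J/(kg*K)

dt_per_2min = power_w * 120 / (air_mass_kg * cp)
print(f"Temperature rise: {dt_per_2min:.1f} C every 2 minutes")  # ~10.5 C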
Re: (Score:2)
Component failures are no big deal in a world designed for fail-over. So what if your $80 CPU halts 6 additional times per CPU-year? Toss that puppy in the scrap bin and slap in a new one. Commodity components used in massively parallel installations have different economics than the million-dollar central processors of the '80s.
Many have solved this another (better) way. (Score:1)
We looked for where the fiber map ran over a mountain range near a hydroelectric plant. Our data center is cooled without chillers, by outside airflow alone for 6 months of the year, and with only a few hours of chiller use per day for another 3 months. I know this won't help people running a DC in Guam, but for those who have a choice, location makes a world of difference.
Late to the Party (Score:1)
Does it make a difference? (Score:2)
I'm bad at physics, so I might say something stupid. But does it actually make a difference? I figure the temperature of the hot components is WAY over 20 C. Whatever energy they output is what you need to compensate for. In the steady state you need to remove as much heat as they generate. Isn't that constant whatever temperature the datacenter runs at?
Re: (Score:3)
Imagine that you used no cooling at all. The components wouldn't get infinitely hot; they'd get very hot, but the hotter they get, the more readily the heat escapes, until they reach a steady state where heat escapes as fast as it's generated and they don't get any hotter.
So technically you're correct--a steady state always means that exactly as much energy is being added as removed--but cooling lets this steady state exist at lower temperatures.
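A toy lumped-parameter sketch of that point: at steady state, heat out equals heat in, so P = h * (T_equipment - T_room) and the equipment settles at T_room + P/h. The power and conductance figures below are invented for illustration:

# Steady state: heat out = heat in, so T_box = T_ambient + P / h.
# P (heat load) and h (thermal conductance) are invented, illustrative values.
power_w = 300.0  # heat generated by the hardware, W
h = 10.0         # conductance from equipment to room air, W/K

for t_ambient in (18.0, 27.0, 35.0):
    t_box = t_ambient + power_w / h
    print(f"Room at {t_ambient:.0f} C -> equipment settles at {t_box:.0f} C")

The same 300 W leaves the box in all three cases; cooling just moves the equilibrium temperature down.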
Re: (Score:2)
Our server room is typically kept at 74 to 76 degrees. We've had a few close calls over the summer where the ambient temp got above 84 and some of the machines just up and froze or shut down (mostly the older gear... newer stuff does seem to handle heat better). As the room temp rises, the internal temperatures rise too - some processors were reporting temps near the boiling point.
Re: (Score:2)
Yes and no. If the room is well insulated, any heat generated in the room has to be forcefully removed. At some point the room will reach equilibrium -- heat will escape at the rate it's generated -- but it will be EXTREMELY hot in there by then. The rate of thermal transfer is dependent on the difference in temperature; the larger the difference, the faster energy transfers. Raising the temp of the room will lead to higher equipment temps; until you do it, you won't know if you've made the difference…
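A quick Euler-integration sketch of that approach to equilibrium (the mass, heat capacity, loss coefficient, and load are all invented constants), showing that heat transfer speeds up as the temperature gap widens:

# Euler integration of dT/dt = P/(m*c) - k*(T - T_ambient).
# P, m, c, and k are all invented, illustrative constants.
P, m, c, k = 2400.0, 27.0, 1005.0, 0.002  # W, kg, J/(kg*K), 1/s
t_ambient, T = 25.0, 25.0
dt = 1.0  # timestep, seconds

for step in range(7200):  # two hours
    T += dt * (P / (m * c) - k * (T - t_ambient))

print(f"Room temperature after 2 h: {T:.0f} C")
# Equilibrium: T = t_ambient + P/(m*c*k) ~ 69 C -- very hot, but finite.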
What data centers did these guys look at? (Score:5, Informative)
I've been an operator and sysadmin for many years now, and I've seen this experiment done involuntarily a lot of times, in several different data centers. Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.
Re: (Score:2)
"the temperature goes well beyond that in a big hurry when the chillers cut out."
AND?
alternative:
SO?
If it's below 35 C outside, why wouldn't you just pump that air in (through filters)?
Your situation is most likely a room that's sealed to keep cool air in, so it traps the heat. If the systems can run at 35 C, you could have windows. Worst case, open some windows and put a fan in.
Computers can run a lot hotter than they could three decades ago.
Re: (Score:1)
A) Temperature STABILITY!
B) Humidity.
The room is sealed and managed by precision cooling equipment because we want a precisely controlled, stable environment. As long as the setpoint is within human comfort, the exact point is less important than keeping it at that point! Google's data has shown, *for them*, 80F is the optimal point for hardware longevity. (I've not seen anywhere that it's made a dent in their cooling bill.)
Re: (Score:2)
I know some people who have tried to work out filtration systems that can handle the volume of air needed for a moderate-size data center (so that outside air could be circulated rather than cooling and recirculating the inside air), and it quickly became as big an expense as just running the A/C. Most data centers are in cities (because that's where the communications infrastructure, operators, and customers are), and city air is dirty.
Re: (Score:2)
Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.
Only because the chillers going out kills the ventilation at the same time. THAT is unhealthy. Cooling a datacenter through radiation is adventurous.
Re: (Score:2)
In my experience, blowers/fans fail more often than compressors and pumps. :-) (blowers run constantly; compressors shouldn't)
The preference in data center cooling is/has been to use "free cooling" through water/glycol loops when the outside air is cold enough to handle heat rejection on its own. Otherwise, compressors are used to push heat into the same loop. It's becoming more trendy to place data centers in cooler climates where compressors are never needed; then stability can be maintained by precise m…
For electronic components, heat == death (Score:4, Informative)
Heat is death to computer hardware. Maybe not instantly, but it definitely causes premature failure. Just look at electrolytic capacitors, to name one painfully obvious component that fails with horrifying regularity in modern hardware. Fifteen years ago, capacitors were made with bogus electrolyte and failed prematurely. Some apparently still do, but the bigger problem NOW is that lots of devices are built with nominally good electrolytic capacitors that fail within a few months -- precisely when their official datasheet says they will. A given electrolytic capacitor might have a design half-life of 3-5 years at temperatures of X degrees, but 50/50 odds of failing at any time after 6-9 months when used at temperatures at or exceeding X+20 degrees. Guess what temperature modern hardware (especially cheap hardware with every possible component cost reduced by value engineering) operates at? X+Y, where Y >= 20.
Heat also does nasty things to semiconductors. A modern integrated circuit often has transistors whose junctions are literally just a few atoms wide (18 is the number I've seen tossed around a lot). In durability terms, ICs from the 1980s were metaphorically constructed from the paper used to make brown paper shopping bags, and 21st-century semiconductors are made from a single layer of 2-ply toilet paper that's also wet, has holes punched into it, and is held under tension. Heat stresses these already-stressed semiconductors out even more, and like electrolytic capacitors, it causes them to begin failing in months rather than years.
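For scale, the widely cited rule of thumb for aluminum electrolytic capacitors is that rated life roughly doubles for every 10 C below the rated temperature (and halves for every 10 C above). A minimal sketch of that rule; the 5000-hour/105 C rating is an illustrative assumption, not a figure from the post:

# 10-degree rule for electrolytic capacitors:
# life = rated_life * 2 ** ((rated_temp - actual_temp) / 10).
# The 5000 h / 105 C rating is an illustrative example, not from the post.
rated_life_h, rated_temp_c = 5000.0, 105.0

for actual_temp_c in (65.0, 85.0, 105.0):
    life_h = rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10.0)
    print(f"At {actual_temp_c:.0f} C: ~{life_h / 8760:.1f} years")
# ~9.1 years at 65 C, ~2.3 years at 85 C, ~0.6 years at 105 C.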
Re: (Score:2)
Define "a lot"? Because I don't see it a lot; I've never read a report that would use the term "a lot".
And you're using "heat" in the most simplistic way. Temperatures over a certain level will cause electronics to wear out faster, or even break; that's not the same as "heat is bad".
Stupid post.
Re: (Score:2)
The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.
Re:For electronic components, heat == death (Score:5, Insightful)
The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.
Yet when Google analyzed data from 100,000 servers, they found failures were negatively correlated with temperature. As long as they kept the temp in spec, they had fewer hard errors at the high end of the operating temperature range. That is why they run "hot" data centers today.
I'll take Google's hard data over your gut feeling.
Re: (Score:2)
Re: (Score:2)
You completely missed the point and obviously didn't RTFA. The empirical evidence shows that datacentres can be run warmer than they typically are now with an acceptable increase in hardware failure -- i.e. bugger all. Increasing the temp in a massive datacentre by 5 degrees C saves a bundle of money and carbon emissions that far more than offsets the cost of replacing an extra component or two a month.
As impressive as your assertions are, they are just that -- assertions. Reality disagrees with you.
Silly Environmentalist... (Score:4, Insightful)
Re: (Score:2)
Explain to me, again, why Facebook isn't dumping tons of money into a one-time investment in making Linux power management not suck? Or other companies, for that matter? Right, because it's an "accepted fact" that data centers must run at very high capacity all the time, and power management efforts would hinder availability. And I presume this is *after* they dumped the money into Linux power management and saw it turn out to be a colossal failure? Well, that's possible--they might have never bothered…
Re: (Score:2)
I like that you assume corporations run everything perfectly, never make a mistake, and never continue to do something based on an assumption.
It would be adorable if it wasn't so damn stupid.
Cui bono? (Score:4, Insightful)
The board of directors [wikipedia.org] of the "Green Grid" is composed almost entirely of companies that would benefit if data centers bought more computing hardware more frequently, rather than continuing to pay for cooling equipment.
Re: (Score:2)
A Couple Degrees Warmer - Electronics Like Cold (Score:1)
Ignorant Article (Score:1)