Cooling Challenges an Issue In Rackspace Outage
miller60 writes "If your data center's cooling system fails, how long do you have before your servers overheat? The shrinking window for recovery from a grid power outage appears to have been an issue in Monday night's downtime for some customers of Rackspace, which has historically been among the most reliable hosting providers. The company's Dallas data center lost power when a traffic accident damaged a nearby power transformer. There were difficulties getting the chillers fully back online (it's not clear if this was equipment issues or subsequent power bumps) and temperatures rose in the data center, forcing Rackspace to take customer servers offline to protect the equipment. A recent study found that a data center running at 5 kilowatts per server cabinet may experience a thermal shutdown in as little as three minutes during a power outage. The short recovery window from cooling outages has been a hot topic in discussions of data center energy efficiency. One strategy being actively debated is raising the temperature set point in the data center, which trims power bills but may create a less forgiving environment in a cooling outage."
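As a rough illustration of why the recovery window is so short, here is a back-of-the-envelope sketch that treats the room air as the only thermal mass. The load, room size, and temperature limit below are made-up assumptions, not Rackspace's figures, and ignoring the thermal mass of the equipment and building exaggerates how fast the air heats up:

```python
# Back-of-envelope estimate of how fast a sealed room heats up once
# cooling stops. Treats room air as the only thermal mass, which
# ignores the heat capacity of equipment, floor and walls, so it
# overstates the speed -- but it shows the order of magnitude.
# All figures below are illustrative assumptions, not Rackspace's.

AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def minutes_to_limit(load_watts, room_volume_m3, start_c=22.0, limit_c=35.0):
    """Minutes until the room air hits the thermal limit with no cooling."""
    thermal_mass = AIR_DENSITY * room_volume_m3 * AIR_SPECIFIC_HEAT  # J/K
    degrees_per_second = load_watts / thermal_mass                   # K/s
    return (limit_c - start_c) / degrees_per_second / 60.0

# 40 cabinets at 5 kW each in a 10 m x 20 m x 3 m room
print(minutes_to_limit(load_watts=40 * 5000, room_volume_m3=600))
```

With 200 kW in 600 cubic metres of air, the air alone hits the limit in well under a minute; it is the thermal mass of the hardware and the building that stretches real rooms out to a few minutes.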
How to estimate the cooling needs? (Score:3, Interesting)
Re:Which only shows (Score:3, Interesting)
Re:Which only shows (Score:3, Interesting)
Re:How to estimate the cooling needs? (Score:5, Interesting)
Believe it or not, in one of those "life coincidences", pi is a safe approximation. Take the number of watts your equipment, lighting, etc., use, multiply by pi, and that's the number of BTUs per hour of cooling you need. Don't forget to include 100 watts per person for body heat.
It'll be 90°F outside, and you'll be a cool 66°F.
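A minimal sketch of that rule of thumb, using the parent's 100 W per person figure; the exact conversion is 3.412 BTU/hr per watt, so pi lands in the right neighborhood (the example loads below are made up):

```python
# Rule-of-thumb cooling sizing: watts of load times ~pi gives BTU/hr.
# The exact conversion is 3.412 BTU/hr per watt.
import math

WATTS_TO_BTU_PER_HOUR = 3.412

def cooling_btu_per_hour(equipment_watts, people=0, watts_per_person=100):
    total_watts = equipment_watts + people * watts_per_person
    return total_watts * WATTS_TO_BTU_PER_HOUR

# e.g. 20 kW of servers and lighting, two people working in the room
print(cooling_btu_per_hour(equipment_watts=20000, people=2))  # exact factor
print(20200 * math.pi)                                        # the "pi" shortcut
```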
Re:Which only shows (Score:3, Interesting)
New cooling strategy needed? (Score:5, Interesting)
The advantage of this is that even in the worst case, where the chillers fail totally in mid-summer, there is no runaway, closed-loop, self-reinforcing heat cycle: the data centre temperature will rise, but it will do so more slowly, and the maximum equilibrium temperature will be far lower (and dependent upon the external ambient temperature).
In fact, as part of the design for the cluster room in our new building I've specified such a system, though due to the limited ducting space available we can only use it for half the heat load.
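For a rough sense of where that equilibrium lands, here is a minimal steady-state sketch for such an outside-air scheme; the airflow and load figures are illustrative assumptions, not our cluster room's numbers:

```python
# Rough steady-state estimate for a ducted outside-air scheme: if the
# chillers fail and you simply move outside air through the room, the
# equilibrium temperature is ambient plus the rise across the room,
# dT = P / (mass_flow * c_p). Figures below are illustrative.

AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def equilibrium_temp_c(ambient_c, load_watts, airflow_m3_per_s):
    mass_flow = AIR_DENSITY * airflow_m3_per_s           # kg/s
    rise = load_watts / (mass_flow * AIR_SPECIFIC_HEAT)  # K
    return ambient_c + rise

# 100 kW of load, 10 m^3/s of outside air, on a 30 C summer day
print(equilibrium_temp_c(ambient_c=30, load_watts=100_000, airflow_m3_per_s=10))
```

That works out to roughly 38 C: unpleasant, but bounded, rather than a runaway.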
Re:Which only shows (Score:4, Interesting)
Re:Which only shows (Score:3, Interesting)
Re:Which only shows (Score:4, Interesting)
For example, Chicago's primary datacenter facility is at 350 E. Cermak (right next to McCormick Place), and the primary interconnect facility in that building is Equinix (which has the 5th and now the 6th floor). A year or so ago there was a major outage there (one that mucked up a good amount of the internet in the midwest) when a power substation caught on fire and the Chicago Fire Department had to shut off power to the entire neighborhood. The backup system started like it should: the huge battery rooms powered everything (including the chillers) for a bit while the engineers started up the generators. Only thing is, the circuitry that controls the generators shorted out. So while the generators themselves were working, the UPS was working, and the chillers were working, this one circuit board blew at the WRONG moment. And it's not as if that circuit had never been exercised; they test the generators every few weeks.
Long story short, once the UPSes started running out of power the chillers started dropping, the lights flickered, and for a VERY SHORT period of time the chillers were out before all of the servers were. Within a minute or two it got well over 100 degrees in that datacenter. Thank God the power cut out as quickly as it did.
So yes, Equinix in that case did everything by the book. They had everything set up the way you would set it up. It should have been no big deal. But something went wrong at the worst possible time, and all hell broke loose.
It could be worse; your datacenter could be hit by a tornado [nyud.net]
Funny you mention this (Score:5, Interesting)
Short-cycling protection (Score:5, Interesting)
Most large refrigeration compressors have "short-cycling protection". The compressor motor is overloaded during startup and needs time to cool, so there's a timer that enforces a minimum time between two compressor starts. 4 minutes is a typical delay for a large unit. If you don't have this delay, compressor motors burn out.
Some fancy short-cycling protection timers have backup power, so the "start to start" time is measured even through power failures. But that's rare. Here's a typical short-cycling timer. [ssac.com] For the ones that don't have backup power, like that one, a power failure restarts the timer, so you have to wait out the full delay after a power glitch.
The timers with backup power, or even the old style ones with a motor and cam-operated switch, allow a quick restart after a power failure if the compressor was already running. Once. If there's a second power failure, the compressor has to wait out the time delay.
So it's important to ensure that a data center's chillers have time delay units that measure true start-to-start time, or you take a cooling outage of several minutes on any short power drop. And, after a power failure and transfer to emergency generators, don't go back to commercial power until enough time has elapsed for the short-cycling protection timers to time out. This last appears to be where Rackspace failed.
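A minimal sketch of the "true start-to-start" logic being recommended, assuming a hypothetical controller that persists the last-start timestamp so a power glitch doesn't reset it. Real units do this in hardware or in the chiller controller, and the file path and delay below are illustrative, not any vendor's:

```python
# Anti-short-cycle timer measured from the last compressor START, with
# the timestamp persisted so it survives a power failure.
import json, os, time

STATE_FILE = "/var/lib/chiller/last_start.json"  # hypothetical location
MIN_START_TO_START = 4 * 60  # seconds; a typical delay for a large unit

def seconds_until_start_allowed():
    """How long the compressor must still wait before its next start."""
    try:
        with open(STATE_FILE) as f:
            last_start = json.load(f)["last_start"]
    except (OSError, ValueError, KeyError):
        return 0.0  # no record yet: allow an immediate start
    elapsed = time.time() - last_start
    return max(0.0, MIN_START_TO_START - elapsed)

def record_compressor_start():
    """Persist the start time so a power glitch doesn't reset the timer."""
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w") as f:
        json.dump({"last_start": time.time()}, f)
```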
Dealing with sequential power failures is tough. That's what took down that big data center in SF a few months ago.
Re:Why run data centres in hot states? (Score:5, Interesting)
There are several good reasons why the servers are located where they are, and not, say, in Alaska.
The main one is the speed of light through fiber: a cable from Houston to Fairbanks would impose a best case of around 28 ms of latency each way. Multiply that by several billion packets.
This is why hosting near the customer is considered a Good Thing, and why companies like Akamai have made it their business to transparently re-route clients to the closest server.
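The 28 ms figure checks out roughly if you assume light in fiber travels at about two-thirds of c and a route length near the great-circle distance (real fiber routes are longer than great-circle, so this is a lower bound):

```python
# One-way latency from route length: distance / (fraction_of_c * c).

SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FRACTION = 0.66  # light in fiber is roughly 2/3 of c

def one_way_latency_ms(route_km):
    return route_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION) * 1000

print(one_way_latency_ms(5500))  # Houston - Fairbanks, roughly: ~28 ms
```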
Back to cooling. A few years ago, I worked for a telephone company, and the local data centre there had a 15 degree C ambient baseline temperature. We had to wear sweaters if working for any length of time in the server hall, but there was a secure, normal-temperature room outside the server hall, with console switches and a couple of ttys for configuration.
The main reason why the temperature was kept so low was to be on the safe side -- even if a fan should burn out in one of the cabinets, opening the cabinet doors would provide adequate (albeit not good) cooling until it could be repaired, without (and this is the important part) taking anything down.
A secondary reason was that the backup power generators were, for security reasons, inside the server hall itself, and during a power outage these would add substantial heat to the equation.
Re:How to estimate the cooling needs? (Score:3, Interesting)
The Prof-in-a-box experiment has a large source of error. He is breathing through a tube, and the heat exchange in your lungs is a convective exchange with too large a magnitude to ignore. If you have doubts about how much heat flows out through breathing, next time you are cold in bed pull the covers up over your head and breathe under them. You will find that the bed gets nice and warm in a very short time.
Re:Which only shows (Score:3, Interesting)
Fast forward to three weeks ago. The temp is fine, but the humidity keeps going down. I tell management, but this is a state agency and everything around here takes three times as long as it should. For a state agency, that's outstanding, by the way. Anyway, nothing gets done. Then we find out WHY the humidity is going down: seems the HVAC monkey didn't screw in the water bottle all the way, and the entire 5-ton unit fills up with water until it shorts out at 4 pm on a Friday afternoon and dumps water everywhere.
Well, we got our four emergency portable coolers in with little tubes leading out into the hall, the fans on, and the doors open right quick, but the temp still shot up to over 100 in under ten minutes. Well, I told them something was up, and anyway, I'm on the VMware/BladeCenter server consolidation team, and this is just more of an argument to fund us better. But I guess the moral of the story is: don't let slack-jawed mouth-breathing yokels fix your mission-critical systems.
Re:Which only shows (Score:3, Interesting)
Oh, and as far as the one-leg-collapsing thing: yes, we were VERY pissed at everyone involved in that little problem; it turned out to be a design flaw in the transfer switch. Because it happened during the day, we ended up taking more of an outage for the replacement of the switch than we did from the incident itself, but it just proves that even a well-designed system can have problems. That datacenter was small enough to have only single-source power; my current datacenter has dual feeds, including dual generators and fully redundant cooling, so a single transfer switch malfunction wouldn't take it down. But you have to work within the parameters set by budget and need.
Re:How to estimate the cooling needs? (Score:3, Interesting)
Power + Heat + Data Centers: a tough problem (Score:2, Interesting)
Disclaimer: I work with SGI, so I can shed some light on their customers' perspective (NASA, gov't, research labs, etc.) and SGI's solution to this problem.
The increasing density of servers is exacerbating the problem of power and cooling in every data center. This week is the SuperComputing trade show [supercomputing.org], where the new Top 500 supercomputers [top500.org] edition was released with "Big Turnover Among the Top 10 Systems"; there you can see the first examples that address these issues.
SGI's new ICE blade system was launched a few months ago; it was designed to address the power consumption, real-estate density, and cooling issues everyone will probably experience on their next server cycle. ICE has shipped, and one installation is now #3 on the Top 500. It's a welcome sign that SGI is back from bankruptcy. I'm sorry if this seems like an advert, so I'm not going to link to SGI -- you can easily find out more if you want.
Re:Which only shows (Score:3, Interesting)
I guarantee your HVAC systems are NOT on UPS power. If, by some massive failure during construction and commissioning, they were and it was missed, I'd recommend firing your entire engineering department and any development contractors involved with building and maintaining your facility. There is no reason to put HVAC systems (chillers, pumps, air handlers, CRACs) on UPS, as they can all manage just fine with losing their power and restarting once power is restored (either from utility or generator). To subject your UPS system(s) to the massive inrush current that would occur when various HVAC component loads are thrust on it would be... well, stupid at best.
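For a rough sense of the scale of that inrush, here is a sketch assuming a typical across-the-line locked-rotor starting current of about six times running current; the plant size and power factor are illustrative assumptions, not any particular facility's:

```python
# Motor starting (locked-rotor) demand is commonly 5-7x running current,
# so a chiller plant restarting can momentarily demand far more apparent
# power than a UPS sized for the IT load. Figures are illustrative.

def inrush_kva(running_kw, power_factor=0.85, locked_rotor_multiple=6):
    running_kva = running_kw / power_factor
    return running_kva * locked_rotor_multiple

# a modest 300 kW chiller plant: momentary demand on the order of 2 MVA
print(inrush_kva(300))
```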
Your power systems sound pretty consistent with what is in most Data Centers (the "Essential Power" is often referred to as Emergency Power in Data Center environments). 30 seconds is a pretty good turnaround time for generators to start up, although 15 seconds is better (and very attainable).
So to answer your question, no, Data Centers do not have a "slacker" design than hospitals. They are actually quite similar in their requirements in terms of HVAC and of course power.
Re:Why run data centres in hot states? (Score:3, Interesting)
Additionally, you appear to be conflating the air temperature in the data centre (15C) with the temperature of the components. Since a heat flux requires a thermal gradient, the components will be warmer than your heat sink.
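A minimal worked example of that gradient, assuming an illustrative thermal resistance from die to room air (not any real part's figure):

```python
# Steady-state component temperature is the heat-sink (air) temperature
# plus power times the thermal resistance of the path to it:
# T_component = T_air + P * theta.

def component_temp_c(air_temp_c, power_watts, theta_c_per_watt):
    return air_temp_c + power_watts * theta_c_per_watt

# 80 W CPU behind a 0.3 C/W heatsink-plus-airflow path, 15 C room air
print(component_temp_c(15, 80, 0.3))  # 39 C at the die, not 15 C
```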
In this town, we can tell the nationality of the boss of any office instantly on walking in - European bosses keep the HVAC (heating, ventilation, air-conditioning, or climate control) set to about 20C; American bosses have it re-set to 25C (until over-ruled for wasting money). There's an Indian HVAC company (in Abu Dhabi), and an instrumentation engineer (last heard of in Houston, America), who need to be taught this lesson. Again. If you meet them, please apply the clue-bat before agreeing to take the equipment they design out to the Empty Quarter to rig it up.
Your carbon dioxide flood for fire suppression would be just as effectively lethal. Operators would need to be kept out of the controlled zone while enclosed generators are running; the fire suppression system should be overridden while operators are in the controlled zone, or you need to be rigged up with cascade air supplies and work-pack SCBA while working in the controlled zone. This isn't rocket science - there are plenty of corpses that point the way to proper management of work in potentially lethal atmospheres. (Of course, there are plenty of workplaces that like to cut corners and put their workers at risk. Don't work there, and do report them to the relevant authorities.)