
When the Power Goes Out At Google

1sockchuck writes "What happens when the power goes out in one of Google's mighty data centers? The company has issued an incident report on a Feb. 24 outage for Google App Engine, which went offline when an entire data center lost power. The post-mortem outlines what went wrong and why, lessons learned and steps taken, which include additional training and documentation for staff and new datastore configurations for App Engine. Google is earning strong reviews for its openness, which is being hailed as an excellent model for industry outage reports. At the other end of the spectrum is Australian host Datacom, where executives are denying that a Melbourne data center experienced water damage during weekend flooding, forcing tech media to document the outage via photos, user stories and emails from the NOC."


  • by alen ( 225700 ) on Monday March 08, 2010 @12:15PM (#31401480)

    Aren't there any people in the data center to tell them that yes, there has been a power outage, such-and-such machines are affected, etc.? It sounds like all they have is remote monitoring, and if something happens then someone has to drive to the location to see what's wrong.

  • Read the comments (Score:5, Insightful)

    by RaigetheFury ( 1000827 ) on Monday March 08, 2010 @12:22PM (#31401574)

    I pity EvilMuppet. Guy is a tool. There are contractual agreements in place to prevent pictures, aka the "rules", but when the data center blatantly LIES it is breaking the trust and violating the agreement. Case law exists where a contract can be broken when one party accuses the other of violating it first.

    That's what happened. The data center was lying about what happened to avoid responsibility for the equipment it was being paid to host. Pictures were taken and are being used to prove the company did violate the trust of the contract.

    You can argue the semantics and legality of it, but if this goes to court the pictures will be admissible and the data center will lose.

  • by Anonymous Coward on Monday March 08, 2010 @12:28PM (#31401638)
    Even a cloud isn't effective if all the nodes go down; it's not magic.
  • by bjourne ( 1034822 ) on Monday March 08, 2010 @12:40PM (#31401768) Homepage Journal

    App Engine must be Google's most poorly run project by far. It has been suffering from outages almost weekly (the status page [google.com] doesn't tell the whole truth, unfortunately), unexplainable performance degradations, data corruption (!!!), stale indexes and random weirdness for as long as it has been running. I am one of those who tried for a really long time to make it work, but had to give up despite it being Google and despite all the really cool technology in it. I pity the fool who pays money for that.

    The engineers who work on it are really helpful and approachable, both on the mailing lists and on IRC, and the documentation is excellent. But it doesn't help when the infrastructure around it is so flaky.

  • ISO9001 (Score:1, Insightful)

    by Anonymous Coward on Monday March 08, 2010 @12:43PM (#31401814)

    This should be standard practice... It's like the good bits of ISO9001 with a bit more openness. When done right, ISO9001 is a good model to follow.

  • by Anonymous Coward on Monday March 08, 2010 @12:49PM (#31401876)

    Epic fail.

    Any data center worth its weight in dirt must have UPS capacity sufficient to power all servers plus all network and infrastructure equipment, as well as the HVAC systems, for at least 2 full hours on batteries, in case the backup generators have trouble getting started and coming online.

    Any data center without both adequate battery-UPS systems and diesel (or natural gas or propane powered) generators is a rinky-dink, mickey-mouse amateur operation.
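
    To get a rough feel for what a two-hour battery requirement actually means, here is a minimal back-of-the-envelope sketch; every load and battery figure in it is a made-up assumption for illustration, not a number from the article:

        # Back-of-the-envelope UPS runtime estimate (illustrative assumptions only).
        def ups_runtime_minutes(load_kw, battery_kwh, inverter_efficiency=0.92,
                                depth_of_discharge=0.8):
            """Minutes of ride-through for a given facility load and installed battery energy."""
            usable_kwh = battery_kwh * depth_of_discharge * inverter_efficiency
            return usable_kwh / load_kw * 60

        # Hypothetical facility: 2 MW of IT load plus 1 MW of cooling.
        load_kw = 3000
        print(round(ups_runtime_minutes(load_kw, battery_kwh=1200), 1))  # ~17.7 minutes
        print(round(ups_runtime_minutes(load_kw, battery_kwh=8200), 1))  # ~120.7 minutes, i.e. 2 hours

        # Going from a typical ride-through to a full 2 hours means several times more
        # battery plant, which is why most sites size the UPS only to bridge the gap
        # until the generators come online.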

  • by nedlohs ( 1335013 ) on Monday March 08, 2010 @01:12PM (#31402160)

    Who cares?

    Power failures are expected; what you can do is have plans for when they occur - batteries, generators, service migration to other sites, and so on. Those plans (and the execution of them) are what they had problems with.

  • by dburkland ( 1526971 ) on Monday March 08, 2010 @01:18PM (#31402228)

    Keith Olbermann, is that you!?

    Fixed that for you

  • by hedwards ( 940851 ) on Monday March 08, 2010 @01:28PM (#31402382)
    My parents once lost power for several hours because a crow got fried in one of the transformers down the street. People around here lose power from time to time when a tree falls on a line. Unplanned power outages are going to happen. Even though line reliability is probably higher now than at any time in the past, outages still happen, and companies like Google that rely on power always being there should have plans.

    This isn't just about keeping the people who use Google services informed; it is an admission that there's something to fix and that they're going to fix what they can. There isn't any particular reason why they need to disclose such plans beyond being a huge player and not wanting to scare away the numerous people who count on them for important work.
  • by hedwards ( 940851 ) on Monday March 08, 2010 @01:29PM (#31402404)
    That's the downside: any time you acknowledge a mistake, you end up looking like you have more of them than the idiots who rack up hundreds of mistakes they don't disclose until they're caught.
  • by Tynin ( 634655 ) on Monday March 08, 2010 @01:46PM (#31402656)
    You are so cute. I know very little about UPS systems, but when I was working in a datacenter that housed 5,000 servers we had a two-story room, twice the size of most houses (~2,000 sq ft), with rows and rows of batteries. I was told that in the event of a power outage we had 22 minutes of battery power before everything went out. Having enough for 2 hours would have been an interesting setup considering how monstrously large this one already was. Besides, I'm unsure why you'd ever need more than those 22 minutes, since that is plenty of time for our on-site staff to gracefully power down any of our major servers if the backup generator failed to kick in.
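
    That same 22-minute window is also enough for the decision to be automated rather than left to whoever happens to be on the floor. A minimal sketch of the logic, assuming the UPS status poll is a hypothetical placeholder for whatever a management card, NUT or apcupsd actually exposes:

        import subprocess
        import time

        ON_BATTERY_GRACE_SECONDS = 300   # give the generator 5 minutes to pick up the load
        RUNTIME_FLOOR_SECONDS = 600      # start shutting down with 10 minutes of battery left

        def get_ups_status():
            """Hypothetical poll of the UPS; a real setup would query NUT, apcupsd,
            or an SNMP-enabled management card instead of returning canned values."""
            return {"on_battery": True,
                    "runtime_seconds": 540,
                    "on_battery_since": time.time() - 400}

        def should_shut_down(status):
            if not status["on_battery"]:
                return False
            waited = time.time() - status["on_battery_since"]
            # Either the generator never picked up the load, or the battery is nearly gone.
            return waited > ON_BATTERY_GRACE_SECONDS or status["runtime_seconds"] < RUNTIME_FLOOR_SECONDS

        if should_shut_down(get_ups_status()):
            subprocess.run(["/sbin/shutdown", "-h", "+1", "UPS on battery, shutting down"], check=False)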
  • by Critical Facilities ( 850111 ) * on Monday March 08, 2010 @02:24PM (#31403100)

    The otherwise top-rated 365 Main [365main.com] facility in San Francisco went down a few years ago. They had all the shizz, multipoint redundant power, multiple data feeds, earthquake-resistant building, the works. Yet their equipment wasn't able to handle what actually took them down - a recurring brown-out. It confused their equipment, which failed to "see" the situation as one requiring emergency power, causing the whole building to go dark.

    I think you made the right decision in changing providers. I remember that story about the 365 outage, and while I am too lazy to look up the details again, I recall it being as you're telling it. To that end, I'd simply say that they most certainly did have the proper equipment to handle the brown-out, but obviously not the proper management. If you're having regular (if intermittent) power problems (brown-outs, phase imbalances, voltage harmonic anomalies, spikes, etc.), just roll to generator; that's what they're there for.

    I'm sick of people assuming that the operators of the facility were simply at the mercy of a power quality issue because they have redundant power feeds and automatic transfer switches. Yes, in a perfect world, all the PLCs will function as designed and the critical load will stay online by itself. However, it sometimes takes foresight and common sense to make a decision to mitigate where necessary. I direct all my guys to pre-emptively transfer to our generators if there are frequent irregularities on both of our power feeds (e.g. during a violent thunderstorm, simultaneous utility problems, etc.).

    In other words, I'm agreeing with you that the service you received was unacceptable. Along with that (and in rebuttal to the parent post), I'm saying that it's not enough to talk about how they came back from the dead, but why they got there in the first place.
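
    The "roll to generator when the utility is acting up" policy is simple enough to state as a rule. A minimal sketch of that decision logic, where the event names, thresholds and transfer call are all hypothetical rather than anyone's real control system:

        from collections import deque
        import time

        # Hypothetical policy: if both utility feeds log repeated power-quality events
        # inside a short window, transfer the critical load to generator pre-emptively
        # instead of waiting for the transfer switch to react to a full outage.
        WINDOW_SECONDS = 900      # look at the last 15 minutes
        EVENT_THRESHOLD = 3       # sags/swells/harmonic alarms per feed

        events = {"feed_a": deque(), "feed_b": deque()}

        def record_event(feed, now=None):
            now = now or time.time()
            q = events[feed]
            q.append(now)
            while q and now - q[0] > WINDOW_SECONDS:
                q.popleft()

        def should_roll_to_generator():
            return all(len(q) >= EVENT_THRESHOLD for q in events.values())

        def roll_to_generator():
            print("Operator action: start generators, transfer critical load")  # placeholder

        # Called from whatever monitors the power meters, e.g.:
        # record_event("feed_a"); record_event("feed_b")
        if should_roll_to_generator():
            roll_to_generator()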

  • floods (Score:2, Insightful)

    by zogger ( 617870 ) on Monday March 08, 2010 @02:34PM (#31403272) Homepage Journal

    Did you ever actually see a big flood? Freaking awesome power, like a fleet of bulldozers. It smashes stuff, rips houses off foundations, knocks huge trees over, will tumble multi-ton boulders ahead of it, etc. It just depends on how big the flood is. We had one late last year here, six inches of rain in a couple of hours, and it just tore stuff up all over. The "building" that can withstand a flood of significant size exists; it is called a submarine. Most buildings of the normal kind just aren't designed to deal with anything that destructive. Some can resist minor floods, but not too many.

  • Re:Don't they have (Score:3, Insightful)

    by DragonWriter ( 970822 ) on Monday March 08, 2010 @05:32PM (#31405662)

    There's more to this story than is being told, and instead, they're focusing on how they came back online rather than why they went offline in the first place.

    That's because they are focusing on what went wrong. Power losses, including ones that take down the whole data center, are accepted risks and part of the reason they have redundant data centers and failover procedures.

    The failure wasn't that they had a partial loss at a datacenter. The failure was that the impact of that loss wasn't mitigated properly by the systems that were supposed to be in place to do that.

  • by Richard_at_work ( 517087 ) on Monday March 08, 2010 @05:48PM (#31405900)
    Let me add my own little story, which happened back in the good old days of June 2009.

    The company had spent the past year rearchitecting the entire IT infrastructure, as the complete core application suite for the business was, other than your standard peripheral utilities like Office et al, green-screen based, using a proprietary language from the early 1980s that was barely still maintained and wasn't going anywhere fast.

    It was my job to handle the systems infrastructure side of the deal, while another team handled software development, and I was way ahead of them - the core business applications were still in the planning stages while the infrastructure to handle and host them was well advanced. The platform we chose was well designed, with onsite redundancy built into the base cost, and easily scalable - dare I say it myself, it was a good job. The only thing I had no hand in on the hardware side was the actual building infrastructure, as we had moved to custom-built offices about 5 years prior, and there was someone else on the team who handled telecoms and the building. But we had a UPS and a generator, so all seemed well in the world.

    Alongside the new infrastructure came the new business continuity plan. Well, I say 'new' - I can't really say there was an 'old' BCP. Sure, we rented space at a major BC facilities provider, but there had never been any test, and there wasn't even any written documentation as to what to do.

    Here is where I must admit my first failure - the BCP was not treated as an integral, tied-in-like-a-knot part of the infrastructure; it was a separate project running alongside. Sure, the new infrastructure was designed to ride out a local server failure through redundancy, or even allow ease of moving to an offsite location. That part of it was all in place. My failure was in not ensuring that the offsite location actually existed as the new infrastructure grew.

    However, by the start of 2009, the basic infrastructure needs of the BCP were well known, costed and presented to the company board of directors. And there it sat. Every month I would ask them if it had been signed off, if I could spend the money. Every month I received a negative answer; it just hadn't been discussed at these busy directors' meetings.

    And that was my second failure. I had no sponsor in those meetings; there was basically no IT representation (the IT director had resigned after the modernisation was pushed through - he wanted no part in it, as he had not been taking the business forward himself). With no sponsor, no one wanted to raise the potential spending of a hundred thousand pounds themselves. And so it sat.

    Then one day in June, we had a routine fan replacement on the UPS. The engineer was signed in, did the replacement under the watchful eye of a senior helpdesk technician, and flipped the UPS back from maintenance bypass to full protected mains. And that was when the first bang happened.

    And all the lights went dark. All the whirring stopped. All the phones stopped ringing. All the people stopped talking.

    It was blissfully quiet for a few precious seconds. And then it was painfully quiet for about another 5. And then all hell broke loose.

    The core business applications did not fare well. The 30-year-old architecture essentially had no failsafe for database writes, and as the server had quit in the midst of several thousand writes, we knew we had just lost a significant amount of data.

    It's worth taking several seconds out to explain how the core application language does its job. Firstly, there is no database server; it's all C-ISAM datafiles directly read from and written to by each individual application. Locks are handled by each application internally, with OS-level locking preventing concurrent writes to the same record in the data file. No database engine, no transaction logging, no rollbacks, no error correction, nothing. There was nothing in the language to protect those poor l
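
    The failure mode described above is easy to sketch. Here is a minimal, hypothetical illustration, in Python rather than the original proprietary language, of application-side record writes guarded only by an OS-level region lock, with no journal to roll back a write that a power cut leaves half-finished:

        import fcntl
        import os
        import struct

        # Hypothetical fixed-length record file in the spirit of a C-ISAM datafile:
        # every application opens the file directly and locks only the record it writes.
        RECORD = struct.Struct("10s 30s d")      # account id, customer name, balance
        PATH = "accounts.dat"

        def write_record(recno, account_id, name, balance):
            fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
            with os.fdopen(fd, "r+b") as f:
                offset = recno * RECORD.size
                # OS-level advisory lock on just this record's byte range (Unix only).
                fcntl.lockf(f, fcntl.LOCK_EX, RECORD.size, offset, os.SEEK_SET)
                try:
                    f.seek(offset)
                    f.write(RECORD.pack(account_id, name, balance))
                    f.flush()
                    os.fsync(f.fileno())
                    # If power dies between seek() and fsync(), the record is torn.
                    # With no transaction log, nothing can detect or roll back the damage.
                finally:
                    fcntl.lockf(f, fcntl.LOCK_UN, RECORD.size, offset, os.SEEK_SET)

        write_record(0, b"ACC-000001", b"Example Customer", 125.50)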
  • by vakuona ( 788200 ) on Monday March 08, 2010 @07:37PM (#31407618)
    Cheap doesn't mean not properly designed! Google doesn't do redundancy on a micro scale; for them it's pointless. In fact, from what I know, Google knows its hardware will fail, so it has written its software to handle hardware failures gracefully. When something like this happens, they write a report and get someone to work out a fix so that the outage doesn't recur.
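
    In practice, "handle hardware failures gracefully" mostly means treating any single machine, or even a whole site, as expendable. A minimal sketch of that pattern, where the replica list and the fetch function are hypothetical stand-ins and not Google's actual code:

        import random
        import time

        REPLICAS = ["dc-a.example.internal", "dc-b.example.internal", "dc-c.example.internal"]

        class AllReplicasFailed(Exception):
            pass

        def fetch_with_failover(key, fetch_from, attempts_per_replica=2, backoff=0.1):
            """Try each replica in random order, retrying with backoff, so losing one
            machine (or one whole data center) degrades the request instead of killing it."""
            last_error = None
            for replica in random.sample(REPLICAS, len(REPLICAS)):
                for attempt in range(attempts_per_replica):
                    try:
                        return fetch_from(replica, key)
                    except (ConnectionError, TimeoutError) as err:
                        last_error = err
                        time.sleep(backoff * (2 ** attempt))
            raise AllReplicasFailed(key) from last_error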
