When the Power Goes Out At Google

1sockchuck writes "What happens when the power goes out in one of Google's mighty data centers? The company has issued an incident report on a Feb. 24 outage for Google App Engine, which went offline when an entire data center lost power. The post-mortem outlines what went wrong and why, the lessons learned, and the steps taken, which include additional training and documentation for staff and new datastore configurations for App Engine. Google is earning strong reviews for its openness, which is being hailed as an excellent model for industry outage reports. At the other end of the spectrum is Australian host Datacom, whose executives are denying that a Melbourne data center suffered water damage during weekend flooding, forcing the tech media to document the outage via photos, user stories, and emails from the NOC."
  • by nacturation ( 646836 ) * on Monday March 08, 2010 @12:33PM (#31401700) Journal

    A new option for higher availability using synchronous replication for reads and writes, at the cost of significantly higher latency

    Anyone know some numbers around what "significantly higher latency" means? The current performance looks to be about 200 ms on average. Assuming this higher-availability model doesn't commit a datastore transaction until it's written to two separate datacenters, is this around 300-400 ms for each put to the datastore?
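For rough intuition, here's a back-of-envelope sketch of that guess (my own assumptions, not anything Google has published): a synchronous put isn't acknowledged until both datacenters have committed it, so each write pays roughly one extra inter-datacenter round trip on top of the local commit.

```python
# Back-of-envelope sketch (assumed model, not published numbers):
# a synchronous put is not acknowledged until both datacenters have
# committed it, so each write pays roughly one extra inter-datacenter
# round trip on top of the local commit time.

def sync_put_latency_ms(local_commit_ms: float, inter_dc_rtt_ms: float) -> float:
    """Rough latency of a put replicated synchronously to a second DC."""
    return local_commit_ms + inter_dc_rtt_ms

# With ~200 ms average puts today and a 100-200 ms RTT between sites,
# 300-400 ms per put is plausible:
print(sync_put_latency_ms(200, 100))  # → 300
print(sync_put_latency_ms(200, 200))  # → 400
```

The model ignores retries and queuing, so real tail latency would be higher still.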

  • by filesiteguy ( 695431 ) on Monday March 08, 2010 @12:44PM (#31401820) Homepage
    I don't run a data center, but I manage systems that rely on the data center 18 hrs/day, 6 days/week. We pass upwards of $300M through my systems. I've yet to get a satisfactory answer as to exactly what would happen if, say, a water line breaks and floods all the electrical equipment (including the dual redundant UPS systems) in the data center.
  • by mcrbids ( 148650 ) on Monday March 08, 2010 @01:12PM (#31402172) Journal

    Of COURSE there are people onsite. Most likely they have anywhere from a dozen to a hundred people onsite. But what's that going to do for you in the case of a large-scale problem?

    The otherwise top-rated 365 Main facility in San Francisco went down a few years ago. They had all the shizz: multipoint redundant power, multiple data feeds, an earthquake-resistant building, the works. Yet their equipment wasn't equipped to handle what actually took them down: a recurring brown-out. The sag confused their gear, which failed to "see" the situation as one requiring emergency power, and the whole building went dark.

    So there you are with perhaps 25 staff in a 4-story building holding tens of thousands of servers; the power is out, nobody can figure out why, and the phone lines are so loaded they're worthless. Even when the power comes back on, it's not like you're going to get "hot hands" in anything less than a week!

    Hey, even with all the best planning, disasters like this DO happen! I had to spend two nerve-wracking days driving to S.F. (several hours' drive) to witness a disaster zone: HUNDREDS of techs just like myself carefully nursing their servers back to health, running disk checks, talking in tense tones on cell phones, etc.

    But what pissed me off (and why I don't host with them anymore) was the overly terse statement, obviously carefully reviewed to make it damned hard to sue them. Was I ever going to sue them? Probably not; maybe just ask for a break on that month's hosting or something. I mean, I just want the damned stuff to work, and I appreciate that even in the best of situations, things *can* go wrong.

    So now I host with the Herakles data center, which is just as nice as the S.F. facility, except that it's closer and noticeably cheaper. Redundant power, redundant network feeds, just like 365 Main. (Better: they have redundancy all the way into my cage; 365 Main only had redundancy to the cage's main power feed.)

    And, after a year or two of hosting with Herakles, they had a "brown-out" situation, where one of their main Cisco routers went partially dark, working well enough that their redundant router didn't kick in right away, leaving some routes up and others down while they tried to figure out what was going on.

    When all was said and done, they simply sent out a statement of "Here's what happened, it violates some of your TOS agreements, and here's a claim form". It was so nice, and so open, that out of sheer goodwill, I didn't bother to fill out a claim form, and can't praise them highly enough!
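The 365 Main failure mode described in this comment is worth a sketch: transfer logic that reacts only when voltage crosses a hard undervoltage cutoff will ride through a brown-out that sags just above that cutoff, never calling for generator power. The threshold and voltage samples below are invented for illustration, not the facility's actual settings.

```python
# Hypothetical illustration: a transfer switch that fires only on a
# hard undervoltage cutoff never "sees" a brown-out hovering above it.

TRANSFER_THRESHOLD_V = 96  # assumed cutoff: 80% of a nominal 120 V feed

def should_transfer(voltage_v: float) -> bool:
    """Naive check: request backup power only below the hard cutoff."""
    return voltage_v < TRANSFER_THRESHOLD_V

# A recurring sag to ~100 V stresses equipment, yet the naive check
# never requests the generators:
brownout_samples = [120, 104, 100, 101, 99, 118, 100]
print(any(should_transfer(v) for v in brownout_samples))  # → False
```

Real power-quality gear typically adds time-under-voltage windows and sag counting for exactly this reason; a single static threshold is the failure-prone simplification.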

  • by mjwalshe ( 1680392 ) on Monday March 08, 2010 @01:16PM (#31402204)
    Try hiring some staff with telco experience instead of kids with perfect GPA scores from Stanford, and design the fraking thing better!
  • Re:Don't they have (Score:4, Interesting)

    by Critical Facilities ( 850111 ) * on Monday March 08, 2010 @02:56PM (#31403554)
    First of all, the "flywheel generators" you're referring to are actually either standalone UPS systems or part of a DRUPS (Diesel Rotary UPS). The leading manufacturers of such equipment publish information on how these systems work.

    However, all of this is moot: even if they had a flywheel setup as you're speculating, it still doesn't explain why 25% of the floor went down. If the equipment was installed, maintained, and loaded properly, they should have been able to get to the generators with no problem.

    are you really telling me that you believe you and ElectricTurtle are smarter than the combined brainpower set loose by Google for building and maintaining this facility?

    No, I'm telling you that I manage a data center, and I know first hand how they work (or in this case, should work). I fail to see an adequate explanation of how this was unavoidable.
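For a sense of scale on the flywheel question raised in this thread: a rotary UPS stores kinetic energy E = ½Iω², and the usable fraction of that energy divided by the critical load gives the ride-through seconds before the diesel engine must pick up the load. The figures below are invented for illustration, not the specs of any real DRUPS unit.

```python
import math

def ride_through_s(inertia_kg_m2: float, rpm: float, load_kw: float,
                   usable_fraction: float = 0.5) -> float:
    """Seconds a flywheel can carry a load: usable share of E = 1/2*I*w^2."""
    omega = rpm * 2 * math.pi / 60  # shaft speed in rad/s
    energy_j = 0.5 * inertia_kg_m2 * omega ** 2
    return usable_fraction * energy_j / (load_kw * 1000)

# e.g. a hypothetical 600 kg·m² wheel at 3000 rpm backing a 500 kW load:
print(round(ride_through_s(600, 3000, 500), 1))  # → 29.6 seconds
```

Tens of seconds is consistent with the usual DRUPS design goal of bridging the ~10 seconds a standby diesel needs to start and accept load — which is why a properly maintained setup reaching the generators should not have been a problem.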
