British Airways Says IT Collapse Came After Servers Damaged By Power Problem (reuters.com) 189
A huge IT failure that stranded 75,000 British Airways passengers followed damage to servers that were overwhelmed when the power returned after an outage, the airline said on Wednesday. From a report: BA is seeking to limit the damage to its reputation and has apologised to customers after hundreds of flights were canceled over a long holiday weekend. The airline provided a few more details of the incident in its latest statement on Wednesday. While there was a power failure at a data center near London's Heathrow airport, the damage was caused by an overwhelming surge once the electricity was restored, it said. "There was a total loss of power at the data center. The power then returned in an uncontrolled way causing physical damage to the IT servers," BA said in a statement. "It was not an IT issue, it was a power issue."
Not IT... Riiiight... (Score:5, Insightful)
Pretty sure UPSes and backup power supplies kinda do fall under that...
Redundant System (Score:5, Insightful)
Re: Redundant System (Score:5, Funny)
Re: (Score:2)
Re:Redundant System (Score:5, Interesting)
This. The BA outage is the second most hilariously inept cause of an outage I've ever seen, after a local government office that was down for over a week because one rackmount server was dropped in transit.
Re: Redundant System (Score:2)
What about the RBS banking outage?
https://en.m.wikipedia.org/wik... [wikipedia.org]
Re: (Score:2)
"most hilariously inept" covers it well.
Re: (Score:2)
I know a utility that saved a little money on the power that ran the electric natural-gas pumps for their biggest generator (the pumps were in another utility's territory) by putting the pumps on a curtailable contract (like 'peak corps' and other programs that cut off your AC when demand is highest).
So that means, when demand was highest, that utility shut down the supply pumps to PG&^H^H^H a large utility's biggest generator. Brilliant!
Re: (Score:2)
Re: (Score:2)
Doesn't everyone use closed transition (make-before-break) transfer switches these days? Failing that, even with shit batteries I'd think a break-before-make transfer should be able to be absorbed by weak batteries on a double conversion UPS, or by the power supplies on the computer hardware running with the UPS in bypass (or a standby UPS).
I have seen the following oddities with emergency power devices (oddly all Eaton):
~10 years ago an Eaton(IIRC) closed transition transfer switch with a firmware bug. There
Re: (Score:2)
"... failed in the blackout because of a fuel pump issue"
http://www.news.com.au/nationa... [news.com.au]
Re: (Score:2)
The powercos in the area have come out and categorically stated there was no form of power hit, dip or other problem on the public side of the meters.
Re: (Score:3)
I agree that they should do it, but it is unlikely that the one-off cost of implementing always-on redundant systems would be this cheap; the scale and scope of the IT systems involved in the airline industry are enormous, and it would likely cost significantly more than that. There are also ongoing costs to consider. Source: Work in software development, have seen projects in organisations way smaller and simpler t
Re: (Score:2)
Ticketing and scheduling systems are not life-safety critical, therefore they don't get the budget for double redundancy. It's aerospace-think come to the company comptroller's office, imposed on IT that made this failure happen.
Also, that "£100 million estimated cost of this incident" is less than the development, rollout and ten years of additional maintenance costs of a full double redundant geographically diverse scheduling system. For an industry that can't even make a baggage sorting system wo
Re: (Score:2)
"Ticketing and scheduling systems are not life-safety critical"
Loading, aircraft balancing (centre of gravity) and fuel load calculations ARE.
All of these were affected. Plus BA's entire VOIP system.
Re: (Score:3)
It is pretty clear that BA leadership screwed up massively here and yes, it is most decidedly an IT problem. The described power-outage scenario is a complete standard one and competent planning prepares for it. Now they are trying to misdirect (i.e. lie) in order to make it appear like this was a natural disaster and of course, they could not have done anything about that. Dishonorable, untrue, but nicely demonstrates the defective characters of the people in power at BA.
The only right thing to do is kick
Re: (Score:2)
...which is also an IT issue.
Re: (Score:2)
Then it's not DR; one of the core facets of DR is testing it to ensure recovery is actually recovery.
Re:Redundant System (Score:5, Funny)
Re: (Score:3)
It's always fun trying to sell a DR failover test to a 24/7 company.
- So what's this for?
- To make sure you can recover quickly in the event of a disaster.
- What if that fails, worst case?
- Well, your warm site fails to take over, so you have a planned outage now instead of an unplanned outage later.
- How long an outage?
- Well, if we fail to bring up the warm site, and fail to fall back to the current production site, there may be some lost transactions and we'd need to shut down long enough to make sure th
Re:Not IT... Riiiight... (Score:5, Informative)
Not to mention fail over to alternative sites.
These are transparent lies. The real issue is well known now, but it's uncomfortable for all involved, so they're making stuff up.
Re:Not IT... Riiiight... (Score:5, Funny)
Well, India has a notoriously unreliable electrical grid.
Re:Not IT... Riiiight... (Score:5, Insightful)
Well, India has a notoriously unreliable electrical grid.
If the power goes down daily or weekly, you learn to deal with it, and your backup generators and fail-over systems become robust. If power goes down once a decade, it causes bigger problems.
HOLY CRAP (Score:3)
Re: (Score:3)
When they _fire_ the CEO, CTO and Director of IT, they should publicly announce 'It wasn't a management issue, it was power.'
it's not our DC so we don't deal with the power (Score:4, Funny)
It's not our DC, so we don't deal with the power part; it's the DC that we outsourced to that does the power part.
Re: Not IT... Riiiight... (Score:3)
Re: (Score:3)
I am pretty sure it was the lack of IT in this case. Even if a power surge happens, PDUs/UPSes pretty much handle any power-related issues. This sounds more like someone was dinking around in the data center and pulled/shorted the wrong wire(s). Even if that did not happen, PDU/UPS equipment is designed to prevent what happened, so yeah, it WAS AN IT PROBLEM.
Power of the almighty dollar (Score:5, Informative)
We all know that this outage was caused by bad faith outsourcing to unqualified persons. Who are they kidding?
https://www.theguardian.com/bu... [theguardian.com]
Oh yeah, power surges are to blame! haha no.
Re: (Score:2)
Re:Power of the almighty dollar (Score:5, Insightful)
A proper IT staff would have built in safeguards against power outages and power surges.
For a company the size of British Airways, I would expect that they would have a hot failover in a different country. Or at least a different geographic location.
In short, they cheaped out on IT and now they are paying for it.
Re:Power of the almighty dollar (Score:5, Insightful)
This is what happens when you treat your IT staff like your Janitorial staff.
Re:Power of the almighty dollar (Score:4, Interesting)
And yet, if you laid your janitorial staff off you'd be up to your neck in filth and garbage in no time at all.
Management who don't rise through the ranks typically have absolutely no respect for the work that 'the ranks' perform.
Re: (Score:2)
This is true.
Re:Power of the almighty dollar (Score:4, Insightful)
An ill-considered plan to save a few dimes has cost them several dollars.
The CEO should have foreseen this and should be let go. As should other executives who approved the offshoring plan.
Offshoring can work- but excessive staffing cuts to save a few extra dollars are begging for something like this to happen.
Infrastructure people should be located on site with the hardware, and there should be multiple hardware systems *with* failover testing on a monthly basis. (Not quarterly; that fails. Only monthly is often enough that the failover is seamless, and there is a good argument for doing a daily failover.)
BIG DC power systems are not really IT guys more (Score:3)
Big DC power systems are not really the IT guys' domain, more like infrastructure / electricians, and some of that stuff is not an easy swap, even more so if a fail-safe tripped and killed all power.
Re:BIG DC power systems are not really IT guys mor (Score:4, Interesting)
It is if it is set up and administered right.
We did monthly failovers between different physical sites. A blown DC at one site wouldn't have made a difference.
Our failovers involved a couple of hours of on-call for about 150 staff. Most of the time only a half dozen were working, but a couple of times a year it would involve most of the staff (and a lot of IT people) for part of that. A database would be out of sync or messed up, and that would fall to the IT staff to fix. It became less common over time.
Did you miss that they fixed the power problems and the IT systems were still messed up for a long time afterwards, indicating poor disaster planning and low staff skill?
A company as big as BA, should have had a separate failover site and been doing regular failovers.
Re: (Score:2)
This is what happens when you outsource, you lose control of what is outside your grasp and you take them at face value.
A CEO who did the right thing would have been pushed out for costing the shareholders too much money long before that.
Re: (Score:2)
Why do you think it took them this long to come out with an explanation?
"It wasn't me, it was the one armed man!" (Score:2, Insightful)
"It was not an IT issue, it was a power issue."
Assuming it was not a lightning strike, it's still your fuckup if "power issues" can damage/take down your IT.
Re:"It wasn't me, it was the one armed man!" (Score:5, Insightful)
Yep.
We have a Caterpillar generator the size of a schoolbus (and given its coloring I've had to restrain myself from sticking a stop-sign on the side as a prank) and a sophisticated transfer switch with power monitoring. When we lose power the batteries hold the DC over until the generator kicks in, and then when power is restored we do not switch back to grid immediately. I am not the person that deals with the power, but as I understand it, the generator and transfer switch monitors the grid for some time before switching back to grid, and there are power conditioners in between. On top of that, the system monitors grid power continuously and will intentionally island the system if there's a significant enough fault.
This is not for something as critical as an airline's control system either. I do not find any reasonable excuse to blame power; you're supposed to assume that power is dirty and unreliable and to work around it.
Re:"It wasn't me, it was the one armed man!" (Score:5, Interesting)
Sounds great...when it works. I bet you've never looked at the code that controls a big automated transfer switch. I have. It's a mess. It's so bad that the very first install Eaton did with our new model, which was in Digital Forest in Tukwila, WA near Seattle, we had three failures in the first ninety days due to bad software. It shut an entire data center down even though utility power was not down, battery power good, and generator working. The guy we dispatched the third time had spent two years in Uganda so he was experienced with bad power. He claimed that power from Seattle City Light was worse than Uganda. The power was so bad that the software in the ATS decided to disconnect everything.
The second time power was restored, because of the bad software, it switched to generator power before the generator was running fully. The voltage dropped and took out quite a few older pieces of equipment and stalled the engine. In other words, the opposite problem BA had.
Re: (Score:3)
The guy we dispatched the third time had spent two years in Uganda so he was experienced with bad power. He claimed that power from Seattle City Light was worse than Uganda. The power was so bad that the software in the ATS decided to disconnect everything.
Probably true. When the first grid-tie inverters were invented, they kept shutting themselves off because as it turned out, the utilities were totally incapable of producing power as clean as they claimed they were, and as they were demanding that the inverter provide. Making better power than utilities in the US is trivial.
Re: (Score:2)
Strange, in the last place I worked with a big DC, they regularly tested the generator (I think monthly, and even from floors away you could *hear* it), and UPS systems. In my five years there, I'd not heard of an outage due to any of the many power failures in our area.
Re:"It wasn't me, it was the one armed man!" (Score:5, Interesting)
I worked in a center that had a big diesel-powered UPS unit the size of a shipping container. It had been there about 3 years before we had a power outage. It detected it, spun up, engaged the clutch and ... the drive belt snapped. Oops. Undervoltage. So rev faster. Still undervoltage, so MOAR revs. Now, in addition to the power outage, we've got a big UPS that's on fire.
Re: (Score:2)
Re: (Score:3)
We test monthly. It's also a way to replenish the fuel before it becomes nonviable.
Re: (Score:2)
Sometimes Murphy is still against you.
A power station I did some work at had a 20MW emergency generator (old jet engine) to kick things off (conveyors and crushers require a lot of juice) and it was tested monthly for around 25 years and maintained carefully. The only time it was needed (due to a fairly rare set of circumstances) it didn't work. A second one was installed later as a backup to the backup but neither was needed again for the remaining life of the power station.
I think tho
Re:"It wasn't me, it was the one armed man!" (Score:5, Insightful)
Until your voltage regulator starts dying and only gives your equipment 80 volts, and no one notices the undervoltage condition during normal maintenance and testing of the generator.
The facilities maintenance people test the generators monthly, but it was not standard practice to test the voltage every single time the generator was tested.
It is now.
But the point is that systems fail in all sorts of fun ways in the real world. You learn, you change, you adapt, as I'm sure BA is doing. All it takes is one major incident to stop people from dragging their feet. I'm sure that is happening now at British Airways.
Re: (Score:2)
Yes, and by outsourcing to someone who has not learned your lessons you have to get through all those mistakes a second time.
Re: (Score:2)
[..]sophisticated transfer switch with power monitoring[..]
Those break. Way more than they should. Often with interesting results that aren't just "power went off".
And you fundamentally can't make them redundant. You can have two of them on completely separate feeds of course, feeding into different power supplies on the servers. That sometimes helps, except when the overvoltage is sufficiently great to get through the protections of the power supply.
Re:"It wasn't me, it was the one armed man!" (Score:5, Insightful)
I am not the person that deals with the power, but as I understand it, the generator and transfer switch monitors the grid for some time before switching back to grid, and there are power conditioners in between.
I used to design the diesel engines used in some of those systems, and have seen them in use. Although your system may monitor the grid to ensure reliability, it's most likely making sure it's not switching between two power sources that are out of phase.
When we would connect one of our gensets to the power grid, we had to match the phase before we could close the switches. To do this, the engine speed was modified to run the generator at slightly above or below the frequency of the grid. If the phase wasn't matched, the power grid would try to force the generator into phase suddenly. It's assumed the power available from the grid is infinite in these types of systems. Therefore an incredible amount of current would flow through the generator and also provide a mechanical jerk [wikipedia.org] to the engine if the switches were closed out of phase. Something will break in a spectacular fashion if this isn't done carefully.
Honestly, this could be what happened to BA.
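For a sense of scale, here is a rough back-of-envelope sketch in Python (my own illustration; the 415 V feed and the 0.05-ohm combined source impedance are assumptions, not anything from BA) of why closing a breaker between two out-of-phase sources is so violent:

import math

V_RMS = 415.0                  # assumed line-to-line voltage at the switchboard
V_PEAK = V_RMS * math.sqrt(2)  # peak of each sine wave
Z_SOURCE = 0.05                # assumed combined source impedance, in ohms

for phase_error_deg in (5, 30, 90, 180):
    # Worst-case instantaneous difference between two equal sine waves
    # separated by this phase angle is 2 * V_peak * sin(angle / 2).
    delta_v = 2 * V_PEAK * math.sin(math.radians(phase_error_deg) / 2)
    surge_current = delta_v / Z_SOURCE
    print(f"{phase_error_deg:3d} deg out of phase -> up to {delta_v:6.0f} V "
          f"across the breaker, ~{surge_current / 1000:5.1f} kA prospective surge")

Even a modest phase error drives kiloamp-scale currents through the generator windings, which is exactly the mechanical jerk described above.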
Re:"It wasn't me, it was the one armed man!" (Score:5, Interesting)
And sometimes **it happens.
I worked as a Senior Network Engineer for a large national backbone provider to the US DOE. At the facilities we owned WE were in charge of oversight of the power system and regular testing. We had one experienced power engineer on staff to oversee everything, though the facility's plant engineering people did all of the actual heavy work.
Back in 2009 we had just completed our annual full transfer test where we switched over to UPS, let the generator fire up, transferred to generator power, and then reversed the process. Everything worked perfectly. The following week we lost power. UPS kicked in, but the generator refused to start. One week earlier everything worked perfectly in the test case where we could have backed out before UPS died. No such luck that day. Our staff lost the ability to monitor the network and the laboratory where we were located lost Internet connectivity as did several other smaller facilities in the area. Took us about an hour to get a trailered generator in place and get things back on-line.
No matter how carefully you plan and test, sometime you still lose.
Re: (Score:2)
Yep.
We have a Caterpillar generator the size of a schoolbus (and given its coloring I've had to restrain myself from sticking a stop-sign on the side as a prank) and a sophisticated transfer switch with power monitoring. When we lose power the batteries hold the DC over until the generator kicks in, and then when power is restored we do not switch back to grid immediately. I am not the person that deals with the power, but as I understand it, the generator and transfer switch monitors the grid for some time before switching back to grid, and there are power conditioners in between. On top of that, the system monitors grid power continuously and will intentionally island the system if there's a significant enough fault.
This is not for something as critical as an airline's control system either. I do not find any reasonable excuse to blame power; you're supposed to assume that power is dirty and unreliable and to work around it.
That is how it is done. It is well-known that power often comes back up "unclean" after a failure.
Re: (Score:2)
Depends. Unfortunately airline stock tends to perform almost regardless of what an airline does simply because when people need to travel there are only so many options and among all airlines across the planet there are only so many seats going so many directions. As long as people want or need to travel the airlines will generate revenue, even those that make terrible mistakes or do terrible things to passengers from time to time, so long as they manage to get flying again.
Re: (Score:2)
A proper IT infrastructure can deal with a direct lightning strike as well.
Re: (Score:2)
A proper IT infrastructure can deal with a direct lightning strike as well.
At what cost? I doubt it's worth it for most businesses. There are too many disasters to plan for: lightning, flood, earthquake, tornado, high winds, several combined. It's probably impossible to protect against everything unless you have Federal Government money.
I've yet to see a surge suppression system that's affordable to a mid-scale business that can take a direct hit, anyway. Plus you get EM induced voltage that fries networking and other stuff, including the power system control circuitry. I've seen
Re: (Score:2)
It is in fact a standard scenario.
Not an IT Issue (Score:2, Insightful)
It absolutely is an IT issue if you cannot automatically recover from power events in a single data center...
Direct cause (Score:5, Insightful)
The power surge was the direct cause. The fundamental cause was the failure of management to ensure they had an appropriate disaster recovery plan.
No, Where we REALLY screwed up was this: (Score:2)
And the difference is...?
Anyone whose Server Farm can be brought down from a power outage does NOT know what they are doing, or care enough about it to bother.
How would this 'admission' make anyone more comfortable about this business?
Re:No, Where we REALLY screwed up was this: (Score:5, Insightful)
How would this 'admission' make anyone more comfortable about this business?
The business doesn't have to worry about that. It's safe regardless; too-big-to-fail public+private yada yada. This is BA we're talking about.
These "stories" are just the public narrative writing process, guided to affix/deflect blame to/from the appropriate parties as the scapegoats are singled out. The BA execs know they have maybe 72 hours or so before this story falls out of the news cycle so they're using that window to make the headlines they need to muddy the waters. Until now the only narrative that has had any play is the "outsourcing did it" one, and that hits too close to management, so they're making this stuff up and putting it out through their MSM channels.
It _was_ an IT issue (Score:5, Insightful)
Re:It _was_ an IT issue (Score:5, Informative)
BA has a DR site independent of the primary that suffered the power issue. But volume groups were not being mirrored correctly to the DR site. When they brought the DR site online, they were getting 3 or more destinations when scanning boarding passes. And since the integrity of the DR site was an issue, it could not be used.
Then the only option is to fix the primary DC, which would have involved installing new servers / routers / switches / etc, configuring them, restoring the data to the last known good state and then bringing it back online. Good luck to anyone trying to deploy new/replacement equipment en masse during the chaos of a disaster. And then restoring data!
Takes days, not hours... unlike whatever RTO/RPO they claimed to be able to meet.
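For what it's worth, the mirroring failure described above is exactly the sort of thing a periodic integrity check can catch. A minimal sketch in Python, assuming quiesced snapshots of the volume groups are exported as files under two hypothetical mount points (paths and layout are made up for illustration):

import hashlib
from pathlib import Path

# Compare checksums of corresponding files on the primary and DR copies.
PRIMARY_ROOT = Path("/mnt/primary/volumes")   # hypothetical snapshot export
DR_ROOT = Path("/mnt/dr/volumes")             # hypothetical DR-side export

def checksum(path: Path) -> str:
    """Stream a file through SHA-256 so large volume images don't fill RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

mismatches = []
for primary_file in PRIMARY_ROOT.rglob("*"):
    if not primary_file.is_file():
        continue
    dr_file = DR_ROOT / primary_file.relative_to(PRIMARY_ROOT)
    if not dr_file.exists() or checksum(primary_file) != checksum(dr_file):
        mismatches.append(primary_file)

print(f"{len(mismatches)} out-of-sync volumes" if mismatches else "DR copy consistent")

Run after each replication cycle, any mismatch is a signal that the DR copy cannot be trusted for failover, long before you actually need it.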
Re:It _was_ an IT issue (Score:5, Insightful)
Okay, they weren't flaming incompetents that didn't have a failover site. They were flaming incompetents that had a failover site that didn't work, because apparently they never tested it. Glad we cleared that up.
Re: (Score:2)
But volume groups were not being mirrored correctly to the DR site.
Ok I know I'm late to the party but that tells me that it was an IT issue right there.
Next excuse.... (Score:4, Funny)
Re: (Score:2)
Don't really think you can blame this on unions... what's your agenda? Most of the time issues like this are caused by management thinking "we already spent X million dollars for server clusters on one site. But it costs X more for each server to have dual power supplies, then X more for each data center power bus and redundant backups, and now you're telling me we have to spend X times 2 for an additional data center?!?!?! And testing, etc., etc. I thought this is what an HA cluster is for!"
Re: (Score:2)
Umm yeah, reread it indeed WOOSH... I'm going to blame it on Obama... Thanks Obama!
ID10Ts (Score:3)
Re:ID10Ts (Score:5, Insightful)
Unfortunately it will probably take several shocks like this, and some high level careers ending as a result, before they start to wise up.
But what *was* an IT issue... (Score:2)
was the fact that you apparently have no redundancy on extremely mission-critical servers.
You have *got* to be kidding... (Score:2)
Every server wasn't connected to a UPS? And the return of the power overwhelmed the UPSes?
And just how did management decide to "save money" on the power for the servers?
Re: (Score:2)
No, they appear to be saying that turning the power on somehow damaged the computers:
"The power then returned in an uncontrolled way causing physical damage to the IT servers"
I don't believe this. The CEO is just protecting his own ass after outsourcing IT.
Re: (Score:2)
No, they appear to be saying that turning the power on somehow damaged the computers:
"The power then returned in an uncontrolled way causing physical damage to the IT servers"
Right, which says that the servers weren't connected to UPSes. Because if they were, then the UPS would have filtered the power surge.
Re: (Score:3)
If I read this right, they are claiming that putting a huge load on the system (bringing up power to too many servers at once) resulted in excessive voltage on the power rails.
In my understanding of physics, increasing the current usually results in reduced voltage. So where did the over-voltage come from?
Or are they saying that their UPS generators were somehow incapable of limiting their output voltage? Pretty strange generators, not suitable for the task?
None of this sounds right, which is why I reject
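The parent's intuition about current and voltage holds for a simple supply behind a source impedance; a tiny worked example (numbers are illustrative only, not BA's):

# With a fixed supply behind some source impedance, drawing more current
# sags the voltage at the load rather than raising it.
V_SOURCE = 240.0   # assumed open-circuit supply voltage
R_SOURCE = 0.2     # assumed feed + wiring impedance in ohms

for load_current in (10, 50, 100, 200):
    v_at_load = V_SOURCE - load_current * R_SOURCE
    print(f"{load_current:4d} A inrush -> {v_at_load:6.1f} V at the racks")

So a load-driven "surge" would normally sag the rails, not raise them; an overvoltage needs something else, such as a regulation fault or an out-of-phase transfer.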
Sounds like an IT probelm to me. (Score:5, Interesting)
I worked as a dev for a pretty big social network company. We were a not-quite also-ran, peaking at Alexa 108 globally, and for a while we were beating the pants off of Facebook. This was in the pre-AWS days when startups still ran their own servers. Early on, we had apparent power failures on two successive Saturday nights. Right when our database scrubbing processes started.
I suggested to our sysadmins that *maybe* it was because all of the disk heads were starting to move at once, and *maybe* it would go away if we staggered the processes across servers.
Yep, problem solved. Our power feeds were rated for average power draw, not peak power draw on all servers in a rack, and peak power came when all of the disks started seeking simultaneously.
It seems the same thing happened at BA, except no one thought to stagger-start the servers. For us, this was the first big system we ever built, so, OK, chalk it up to growing pains (and the problem never, ever happened again). But BA? Shame on them.
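The stagger-start fix is trivial to script. A minimal sketch, with hypothetical host names, delay, and job path, of offsetting start times so every machine doesn't hit peak draw at once:

import subprocess
import time

SERVERS = ["db01", "db02", "db03", "db04"]   # hypothetical hosts
STAGGER_SECONDS = 120                        # spread out the peak disk/PSU load
SCRUB_COMMAND = "/usr/local/bin/run_scrub"   # hypothetical maintenance job

for i, host in enumerate(SERVERS):
    if i:
        time.sleep(STAGGER_SECONDS)          # wait before waking the next box
    # Fire the job and move on; we only care about offsetting the start times.
    subprocess.Popen(["ssh", host, SCRUB_COMMAND])
    print(f"started scrub on {host}")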
Last year... (Score:2)
Comment removed (Score:3)
Re: (Score:3)
A report I read suggested they had around 500 cabinets of machines (not sure if this was across both sites or just the primary). Estimating 2 kW/cabinet brings you into MW territory for the lot, so this is a non-trivial amount of machinery to keep running in a power-failure situation. The failure description suggested that it was a surge issue, so it's not clear if this was just stupids on their behalf (not staggering the restart) or something else going wrong within the site (a bad failover to generators, etc).
Either wa
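A quick back-of-envelope check of that estimate (the 2 kW per cabinet and the overhead factor are assumptions, not BA's figures):

cabinets = 500
kw_per_cabinet = 2          # assumed average draw per cabinet
pue = 1.5                   # assumed overhead factor for cooling, losses, etc.

it_load_kw = cabinets * kw_per_cabinet
total_kw = it_load_kw * pue
print(f"IT load: {it_load_kw / 1000:.1f} MW, with overhead: {total_kw / 1000:.2f} MW")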
Re: (Score:3)
However, their DR planning did get them running again within a few days which is more than most companies can manage.
Most companies wouldn't manage to recover in a few days from an actual disaster. However, all that seems to have happened is that they fried a few servers. It doesn't take a lot of planning to get some spares in and recover some toasted machines. Not knocking the guys on the ground, who probably had to work quite hard to do it, but trying to fix up the primary site because the failover was dysfunctional is no evidence at all of a good DR plan.
Also, we don't know where the surge came from, or how it was able to
Yes, because small scale deployments always map up (Score:3)
"I was able to protect my puddly shit at my workplace with equipment I bought at Frys, so BA should have been able to protect its 12,000 servers just like I did."
Scaling up is hard. Just because you were able to do it with your install doesn't mean it would be just as easy for a larger install.
That said, they should have done a better job at BA. Even though testing power isn't part of a smaller DC's MO, it should be for a company the size of BA...at least in their dev environment.
amazing how airlines are all having issues (Score:2)
Here's a shovel (Score:2)
I don't think he's digging his hole fast enough. Feel free to borrow my shovel.
Or, perhaps a better solution would be for someone else at BA to clonk him over the head from behind with a little statuette or something so he just stops talking.
Uncontrolled RETURN of power? (Score:2)
First, there is a cut in mains power to the data centre. No biggie, the batteries take the load. The backup generators then start to spin up and then supply power and the datacentre keeps running.
No lights went off, no computers crashed, business kept running.
But then, mains power is available again. How do you transition from your own generated power back to grid power? You can't just flick a switch. For a start you should ensure that the phase of the two power sou
Re: (Score:2)
After people hitting the big red button (or the fire alarm doing it automatically), this is the most common failure mode for a data center.
BA utility providers have already call BS on this (Score:2, Informative)
The utility providers for all of BA's major operations centers in England are all on record [ft.com] as saying there were no power surges, anomalies, etc. This wasn't "we're unaware of...", they all went back over their logs and categorically denied it (seems like they weren't happy about BA trying to pin any bit of this sh*t show on them). As many have pointed out above and elsewhere, none of this passes the sniff test. BA's taking a beating for this, not just over stranding passengers but how they handled the s
IT servers??? (Score:2)
As opposed to another type of servers? Do they have building & grounds servers? Operations servers? Receptionist servers?
Just curious...
Awaiting lawsuit for clarification (Score:2)
There is no spoon (Score:2)
And there was no power surge (not outside the DCs anyway).
A large number of ex-BA IT staff have commented in fora about the historic robustness of the system, however over the last 5 years BA has systematically gutted its IT staff and outsourced just about everything to India.
The CIO of BA (and IAG) is a manager whose last claim to fame was being the person responsible for ramming through the highly contentious (as in strike-causing) cabin crew contracts which stripped out many rights in 2011.
He has ZERO IT
Re:Don't UPSes also act as surge protectors? (Score:5, Funny)
How big a current spike was this?
1.21 Jiggawatts, and it sent them back to 1985.
Re: (Score:3)
Great Scott!!
Re: (Score:2, Funny)
85 mph won't cut it. Gotta get that baby up to 88!
Re: (Score:3)
While many industrial UPS systems are dual conversion systems (essentially, the critical load is powered from the battery bus/inverter, and fails over to mains in the event of an inverter/battery malfunction), they are sometimes operated in standby mode (the critical load is powered from mains, and fails over to the battery bus/inverter in the event of a mains failure) as this saves energy due to
Re: (Score:3)
Re:Don't UPSes also act as surge protectors? (Score:5, Interesting)
They do, but some surge protection devices have a limited number of surges they can absorb before they have to be replaced. If there were a number of surges, it's certainly feasible for the protection chain to fail at some point.
An anecdote from a few weeks ago with a data center I help manage. It has a backup generator, automatic switch gear and a Schneider Electric Galaxy double conversion UPS. Yes we don't have two, but we ain't an airline. We do have another data center on another site to take over if needed though.
So a few weeks back our phones go wild with texts fired off by the UPS tossing SNMP traps around. One sprint later, the UPS console is showing no input power and our in-house electricians lay rubber from one end of the campus to the other to get to the sub in time. As we wait for the UPS to hit that magic 5 minutes when it triggers the auto-shutdown sequences on the servers, the sparkies discover the sub's output is fine and the generator isn't running.
Then all shit breaks loose, ten power cycles on the UPS input, some lasting long enough to switch from battery to mains, some not. With ten minutes left on the batteries, the UPS gives up, shuts the inverter and charger down and switches the load to static bypass. Room goes silent except for the UPS alarms, and then the eleventh return cycle comes and goes in about three seconds. We hear PSU fans starting and then winding down. I dropped the master breaker on the DB and isolated the room from the UPS. Down until the sparkies figure it out. There goes three hours of our lives.
Turns out that the automatic switch gear had some arc damage on the utility-side contactor feeding the control boards, probably caused by the eight months of load-shedding (read: utility-driven power cuts to ration power) we had experienced two years ago. That was enough to drop the voltage at one sensor below the trigger threshold and caused that contactor and the main load contactor to open. Before it could start the generator up, the control board then decided the utility had returned, so it closed the contactors again. And open again, and close again. The sound of a 3-phase 480V 500A contactor switching twice a second is enough to make the sparkies use words a sailor would be proud of.
We had to lock out the sensors, rig a temporary bypass on the contactors to power the room from the generator feed side and replace the damaged contactors before we were fully safe again. We lost 2 PSUs out of 90 and no data. We were lucky.
I relate this to show that no matter how good the power protection architecture is, multiple UPSes, twin feeds etc, shit can and does happen. We were lucky we had people on the site who knew what trouble sounds like and were willing to isolate the room.
So I'm willing to accept that BA lost a data center to power problems. But I'm not willing to accept that the loss of a single data center can shut down global operations. BA must have multiple redundant data centers with a seamless failover mechanism. And that is a failure of IT pure and simple.
Re: (Score:3, Insightful)
"We were lucky we had people on the site who knew what trouble sounds like and were willing to isolate the room"
You weren't lucky; it's called having good, well-trained/practised staff on-site. And based on what everyone has been saying, this is something that was severely lacking at BA.
Re: (Score:2)
While that's how you would do it, the reality of what BA did seems to be less sensible.
Re: (Score:2)
AND test them every month.
Not just to make sure the hardware works, but to make sure EVERYONE involved knows what a failure of electrical failover looks like, that the dual redundancy also works, and that everyone knows where all the controls are and what they do (hardware and software).
It may be expensive relative to your pay grade, but in the greater scheme of the world's largest airlines, it's totally piddling. Have you
Some DCs have their own substations, and it may (Score:2)
Some DCs have their own substations, and it may have been something inside the DC's power system that failed.