RIM Releases Reason for Blackberry Outage
An anonymous reader writes "According to BBC News, RIM has announced that the cause of this week's network failure for the Blackberry wireless e-mail device was an insufficiently tested software upgrade. Blackberry said in a statement that the failure was triggered by 'the introduction of a new, non-critical system routine' designed to increase the system's e-mail holding space. The network disruption comes as RIM faces a formal probe by the US financial watchdog, the Securities and Exchange Commission, over its stock options."
Re:I'd hate to be their QA manager right now! (Score:5, Insightful)
It's quite likely the development group listed this as a risk, with a good backout plan, and upper management simply didn't want to pay for the cost of having a quick backout.
If that's the case, you can be pretty sure upper management WON'T take the blame.
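The "quick backout" the parent is talking about can be sketched in a few lines. This is purely illustrative; the function names (`apply_upgrade`, `health_check`, `back_out`) are invented stand-ins, not anything resembling RIM's actual tooling:

```python
def deploy_with_backout(apply_upgrade, health_check, back_out):
    """Run an upgrade; if the post-change health check fails, back out.

    All three arguments are callables supplied by the operator:
      apply_upgrade() -- performs the change
      health_check()  -- returns True if the system looks healthy
      back_out()      -- restores the pre-change state
    Returns "kept" or "backed_out".
    """
    apply_upgrade()
    if health_check():
        return "kept"
    back_out()  # the step management has to budget time and money for
    return "backed_out"

# Toy usage: an "upgrade" that breaks the system, then gets rolled back.
state = {"healthy": True}

def bad_upgrade():
    state["healthy"] = False

def check():
    return state["healthy"]

def rollback():
    state["healthy"] = True

result = deploy_with_backout(bad_upgrade, check, rollback)
# result == "backed_out", and state["healthy"] is True again
```

The point of the sketch is that the backout path is a first-class part of the deployment, not an afterthought; skipping it is exactly the cost-cutting the parent describes.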
Re:I'd hate to be their QA manager right now! (Score:5, Insightful)
Blasphemer!
Re:I'd hate to be their QA manager right now! (Score:5, Insightful)
Whatever it is, the production problems are due to bad process, which is what management is supposed to control. Management isn't even responsible for coming up with the technical details of the process; they are responsible for making sure that there is a sufficient process (sufficient meaning that all parties, devs, QA, BAs, and the client, agree it is good enough), and for making sure that the process is followed.
A little over a year ago in Toronto, ON, Canada, the Royal Bank of Canada had a similar problem, but of course with a bank it is much more dangerous: it's a lot of money belonging to a lot of people. Heads rolled at the management level only.
Is this really so bad? (Score:4, Insightful)
Yeah, they've got areas to tighten up their QA and patch processes, but on the whole they got it all back up and running faster than most enterprises get their email functioning after a worm.
Pop quiz! (Score:3, Insightful)
Which of these is worse?
A) The fact that one piece of software took down their environment.
B) The fact that their failover plan didn't work.
C) All of the above.
D) None of the above.
Personally, I vote for "B". Face it, s**t happens. But when you plan for s**t happening and the plan doesn't work, that's a VERY bad thing.
Testing of Complex Systems (Score:4, Insightful)
And a bunch of suits will want the heads of the technicians responsible.
I feel for them, I really do.
A few years ago I put in a minor maintenance change that made headlines for my employer.
This is a natural result of the budgetary constraints we have to live with in the real world. Testing and certification are expensive, and the more complex the environment, the more expensive they get. It is difficult to justify a full-blown certification test for minor, routine maintenance, unless you are talking about health and safety systems. So a worst-case event occurred: RIM suffers some corporate embarrassment, some low-level techs will get yelled at and possibly fired, and a bunch of people had to suffer crackberry withdrawal.
Nobody died. No planes crashed. No reactors melted down.
RIM will work up some new and improved testing standards and tighten the screws on system maintenance so much that productivity will suffer; they may even spend a bunch of money on the equipment needed to do full-production-parallel certification testing. And then in a year or so they'll cut the budget to upgrade the certification environment as a 'needless expense', and come up with work-arounds to reduce the time it takes to get trivial changes and bugfixes rolled out.
I wish them luck. Especially to the poor sods who did the implementation.
At least when I did my 'headline-making-minor-maintenance' it only made the local papers for a couple of days.
Re:I'd hate to be their QA manager right now! (Score:5, Insightful)
Because that's not how change should happen in large/business critical applications.
What should happen is that the update is thoroughly tested, a change control request is raised and at the next change control meeting the change request is discussed.
The change request should include at the very least a benefit analysis (what's the benefit in making this change), risk analysis (what could happen if it goes wrong) and a rollback plan (what we do if it goes wrong). None of these should necessarily be vastly complicated - but if the risk analysis is "our entire network falls apart horribly" and the rollback plan is "er... we haven't got one. Suppose we'll have to go back to backups. We have tested those, haven't we?" then the change request should be denied.
As much as anything else, this process forces the person who's going to be making the change to think about what they're going to be doing in a clear way and make sure they've got a plan B. It also serves as a means to notify the management that a change is going to be taking place, and that a risk is attached to it.
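The checklist above (benefit, risk, rollback) amounts to a simple gate. Here is a minimal sketch of such a gate; the field names are invented for illustration and don't come from any real change-management tool:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    benefit: str        # what's the benefit in making this change
    risk: str           # what could happen if it goes wrong
    rollback_plan: str  # what we do if it goes wrong ("" = no plan)

def approve(req: ChangeRequest) -> bool:
    """Deny any request missing a benefit, a risk analysis, or a rollback plan."""
    return all([req.benefit.strip(), req.risk.strip(), req.rollback_plan.strip()])

# A complete request passes the gate...
ok = approve(ChangeRequest(
    benefit="increase e-mail holding space",
    risk="routine misbehaves under production load",
    rollback_plan="disable the routine and restore the previous build",
))

# ...but a request with no rollback plan is denied, per the parent's rule.
denied = approve(ChangeRequest(
    benefit="increase e-mail holding space",
    risk="our entire network falls apart horribly",
    rollback_plan="",  # "er... we haven't got one"
))
# ok is True, denied is False
```

Real change-control boards obviously weigh these fields rather than just checking they're non-empty, but even this trivial gate blocks the "no rollback plan" case described above.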
And if a change is made but hasn't been approved through that process, then it's a disciplinary issue.
Of course, it's entirely possible that such a process was in place and someone did put a change through without approval. In which case, I don't envy their next job interview.... "Why did you leave your last job?"
Re:I'd hate to be their QA manager right now! (Score:4, Insightful)
Yes it is. They've put themselves in a critical... (Score:5, Insightful)
Several hours of email downtime is "OKish" if you are talking about a medium-sized company that only has a handful of servers and a few IT guys. This is not the same at all.
Prior to this, I never realized that the RIM system was THIS centralized. It's kind of concerning really. And I don't quite understand why so many US gov't users are allowed to route their email through a NOC in Canada (disclosure: I'm Canadian).
Re:I'd hate to be their QA manager right now! (Score:3, Insightful)
Re:Testing of Complex Systems (Score:3, Insightful)
However, I will argue that the outage may have contributed to deaths. Many hospitals use Blackberries instead of pagers (two-way comms), so paging a surgeon, doctor, or other staff to an emergency may not have worked well. I am sure there are other examples of critical applications (which should or should not use Blackberries) that may have been affected. The obvious problem is that I cannot provide stats, because they certainly aren't available, but saying that nobody died would be a gross overstatement.
On a lighter note, other casualties may have been caused by crackberry withdrawal: people walking into walls because they aren't used to walking without reading their Blackberry, people jumping out of buildings because they can't get their latest stock quote, etc.
living proof that QA matters... (Score:3, Insightful)
You can't expect programmers to do perfect work, even with unit testing and all the other basic amenities of software development. It requires QA, and that is something sorely lacking in contemporary software products. From the smallest OS X widget to MS Vista, testing matters.
RS
Re:I'd hate to be their QA manager right now! (Score:3, Insightful)
We're lucky we can get through a single pass of functionality testing; forget about load/stress/performance/long-term stability. We're lucky we have a test environment composed of hardware retired from production, because it was deemed insufficient to meet the needs of the production environment.
True story: I was supposed to be testing a product that interfaced with an IP videoconferencing bridge. Except we had no such bridge in our environment, and no budget to purchase one. No one in management thought this was absurd until I took a cardboard box and wrote "Video Bridge" on it, along with little holes labeled eth0, eth1, DS1, etc. (much like the famous P-p-p-powerbook). I complained to the VP of Engineering that our tests were blocked because I couldn't get the video bridge to come up on our lab network. When I showed him the "box," he got the point.
In my experience, customers are more interested in getting new features ASAP than they are in reliability, which is why so many organizations put a premium on rolling out new features quickly. When was the last time anyone worked on a release with no new features outside of performance and stability improvements?
Re:I'd hate to be their QA manager right now! (Score:4, Insightful)
How many people here have checked in buggy code that neither management nor QA knew was buggy? (crickets)
How many people here have been on projects where management shoved the code out the door despite major bugs that they knew about? (thunderous applause)
How many people here have tried to get time on The Schedule to do something The Right Way, only to be told by management to do it half-assed, because that's all there's time/resources for? (applause, hooting)
There you go.