RIM Releases Reason for Blackberry Outage

An anonymous reader writes "According to BBC News, RIM has announced that the cause of this week's network failure for the Blackberry wireless e-mail device was an insufficiently tested software upgrade. Blackberry said in a statement that the failure was triggered by 'the introduction of a new, non-critical system routine' designed to increase the system's e-mail holding space. The network disruption comes as RIM faces a formal probe by the US financial watchdog, the Securities and Exchange Commission, over its stock options."
This discussion has been archived. No new comments can be posted.

  • by Mr Pippin ( 659094 ) on Friday April 20, 2007 @11:31AM (#18812203)
    More importantly, they apparently had either no backout plan or a very bad one.

    It's quite likely the development group listed this as a risk, with a good backout plan, and upper management simply didn't want to pay for the cost of having a quick backout.

    If that's the case, you can be pretty sure upper management WON'T take the blame.
  • by spells ( 203251 ) on Friday April 20, 2007 @11:38AM (#18812311)
    You can tell this is a geek site. Bad software rollout, first post wants to blame the QA manager, second wants to blame "Upper Management." How about a little blame for the devs?
  • by lucabrasi999 ( 585141 ) on Friday April 20, 2007 @11:49AM (#18812433) Journal
    How about a little blame for the devs?

    Blasphemer!

  • by bradkittenbrink ( 608877 ) on Friday April 20, 2007 @11:52AM (#18812471) Homepage Journal
    Clearly bugs originate with devs, the same way typos and spelling errors originate with authors. The occurrence of such errors is inevitable. The process as a whole is what is responsible for eliminating them. To the extent that the devs failed to contribute to that process then yes, they also deserve blame.
  • by roman_mir ( 125474 ) on Friday April 20, 2007 @11:53AM (#18812499) Homepage Journal
    I am not sure if you are trying to be funny or insightful; probably you are aiming for a bit of both. However, while bugs in software are (inevitably) the developers' fault, releasing software with bugs into a production system is always management's fault. There must be a process in place to catch bugs before release for mission-critical systems (isn't this one of them?). There must be a process in place for a quick rollback on such systems. There must be some form of backup. How about running both the new and old systems in parallel for a while, with the ability to switch back to the old one if the new one fails? (A rough sketch of that idea follows this comment.)

    Whatever the specifics, production problems like this come down to bad process, which is exactly what management is supposed to control. Management is not even responsible for coming up with the technical details of the process; they are responsible for making sure a sufficient process exists (sufficient meaning that all parties, devs, QA, BAs, and the client, agree it is good enough) and for making sure that process is followed.

    Over a year ago now, in Toronto, Ontario, the Royal Bank of Canada had a similar problem, but of course with a bank the stakes are much higher: it is a lot of money belonging to a lot of people. Heads rolled at the management level only.
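
A rough, hypothetical Python sketch of the parallel-run idea in the comment above. Nothing here reflects RIM's actual infrastructure; Backend, ParallelRollout, and the failure rates are invented purely to illustrate running a new system alongside the old one with an automatic switch back when the new one misbehaves.

import random


class Backend:
    """Stand-in for a mail-routing service; a real one would push to handsets."""

    def __init__(self, name, failure_rate=0.0):
        self.name = name
        self.failure_rate = failure_rate

    def deliver(self, message):
        # Simulate delivery, failing at the configured rate.
        if random.random() < self.failure_rate:
            raise RuntimeError(f"{self.name} failed to deliver {message!r}")
        return f"{self.name} delivered {message!r}"


class ParallelRollout:
    """Route traffic to the new system, keeping the old one as a hot standby."""

    def __init__(self, old, new, max_errors=3):
        self.old, self.new = old, new
        self.active = new
        self.errors = 0
        self.max_errors = max_errors

    def deliver(self, message):
        try:
            return self.active.deliver(message)
        except RuntimeError:
            self.errors += 1
            if self.active is self.new and self.errors >= self.max_errors:
                # The backout plan: every later message goes to the old system.
                self.active = self.old
            # Retry the failed message on the old, known-good system.
            return self.old.deliver(message)


if __name__ == "__main__":
    old = Backend("old-routine", failure_rate=0.0)
    new = Backend("new-routine", failure_rate=0.5)  # the insufficiently tested upgrade
    rollout = ParallelRollout(old, new)
    for i in range(10):
        print(rollout.deliver(f"email-{i}"))
    print("active backend:", rollout.active.name)

The point is not the few lines of bookkeeping but that switching back requires the old system to still be running, which is exactly the cost the grandparent suspects management declined to pay.
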
  • by TheBishop613 ( 454798 ) on Friday April 20, 2007 @11:53AM (#18812509)
    Am I the only one who thinks they actually survived this pretty well? I mean sure, the goal is to try to make sure that the system never goes down and is up 24/7, but sometimes shit happens in large systems. It seems to me that getting everything back to normal within 12 hours is pretty reasonable. Did they have an instant fix? Well no, of course not, but they got the system back to a working state relatively quickly and hopefully didn't lose data.


    Yeah, they've got areas to tighten up in their QA and patch processes, but on the whole they got it all back up and running faster than most enterprises get their email functioning after a worm.

  • Pop quiz! (Score:3, Insightful)

    by 8127972 ( 73495 ) on Friday April 20, 2007 @11:55AM (#18812523)
    Which is worse:

    A) The fact one piece of software took down their environment.
    B) Their failover plan didn't work.
    C) All of the above.
    D) None of the above.

    Personally, I vote for "B". Face it, s**t happens. But when you plan for s**t happening and the plan doesn't work, that's a VERY bad thing.
  • by Fritz T. Coyote ( 1087965 ) on Friday April 20, 2007 @12:05PM (#18812659) Homepage
    I love the (Friday) morning quarterbacks who will now proceed to beat up RIM for a system outage after a 'non critical' upgrade.

    And a bunch of suits will want the heads of the technicians responsible.

    I feel for them, I really do.

    A few years ago I put in a minor maintenance change that made headlines for my employer.

    This is a natural result of the budgetary constraints we have to live with in the real world. Testing and certification are expensive, and the more complex the environment, the more expensive they get. It is difficult to justify a full-blown certification test for minor, routine maintenance, unless you are talking about health and safety systems. So a worst-case event occurred: RIM suffers some corporate embarrassment, some low-level techs will get yelled at and possibly fired, and a bunch of people had to suffer crackberry withdrawal.

    Nobody died. No planes crashed. No reactors melted down.

    RIM will work up some new and improved testing standards, and tighten the screws on system maintenance so much that productivity will suffer, they may even spend a bunch of money on the equipment needed to do full-production-parallel certification testing. And then in a year or so cut the budget to upgrade the certification environment as 'needless expense', and come up with work-arounds to reduce the time it takes to get trivial changes and bugfixes rolled out.

    I wish them luck. Especially to the poor sods who did the implementation.

    At least when I did my 'headline-making-minor-maintenance' it only made the local papers for a couple of days.

  • by jimicus ( 737525 ) on Friday April 20, 2007 @12:08PM (#18812713)
    How about a little blame for the devs?

    Because that's not how change should happen in large or business-critical applications.

    What should happen is that the update is thoroughly tested, a change control request is raised and at the next change control meeting the change request is discussed.

    The change request should include at the very least a benefit analysis (what's the benefit in making this change), a risk analysis (what could happen if it goes wrong) and a rollback plan (what we do if it goes wrong). None of these should necessarily be vastly complicated - but if the risk analysis is "our entire network falls apart horribly" and the rollback plan is "er... we haven't got one. Suppose we'll have to go back to backups. We have tested those, haven't we?" then the change request should be denied. (A toy sketch of such a request follows this comment.)

    As much as anything else, this process forces the person who's going to be making the change to think about what they're going to be doing in a clear way and make sure they've got a plan B. It also serves as a means to notify the management that a change is going to be taking place, and that a risk is attached to it.

    And if a change is made but hasn't been approved through that process, then it's a disciplinary issue.

    Of course, it's entirely possible that such a process was in place and someone did put a change through without approval. In which case, I don't envy their next job interview.... "Why did you leave your last job?"
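
A toy Python sketch of the change request record described in the comment above. The field names and the approval rule are invented for illustration; they do not describe any real change-management tool, and certainly not RIM's internal process.

from dataclasses import dataclass


@dataclass
class ChangeRequest:
    title: str
    benefit: str           # what we gain by making the change
    risk: str              # what could happen if it goes wrong
    rollback_plan: str     # what we do if it goes wrong
    rollback_tested: bool  # has the rollback actually been exercised?

    def approved(self) -> bool:
        # Deny anything whose fallback amounts to "restore from backups we never tested".
        return bool(self.rollback_plan) and self.rollback_tested


if __name__ == "__main__":
    cr = ChangeRequest(
        title="New cache routine to increase e-mail holding space",
        benefit="More headroom for queued messages",
        risk="Message routing degrades network-wide",
        rollback_plan="Disable the new routine and revert to the previous build",
        rollback_tested=False,
    )
    print("Approved?", cr.approved())  # False: the rollback was never tested

Whether the gate is a record like this, a form, or a weekly meeting, the value is the same: it forces whoever is making the change to write down plan B before touching production.
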
  • by bcat24 ( 914105 ) on Friday April 20, 2007 @12:11PM (#18812743) Homepage Journal
    I couldn't agree more. Yes, the developers should be responsible for their errors, but still, they're only human. Even the best dev makes a serious mistake from time to time. That's why it's essential to have good coders and good QA folks and good management for any project, especially one as large as the Blackberry network. Sometimes redundancy is a good thing.
  • by WoTG ( 610710 ) on Friday April 20, 2007 @12:43PM (#18813153) Homepage Journal
    RIM is not a regular company. They have specifically created a centralized system where the email of millions of people depends on the uptime of their two (?!?!) data centres. Delivering email is literally their business, and uptime is a critical part of that. IMHO, even half an hour of system-wide downtime is pushing RIM's luck.

    Several hours of email downtime is "OKish" if you are talking about a medium sized company that only has a handful of servers and a few IT guys. This is not the same at all.

    Prior to this, I never realized that the RIM system was THIS centralized. It's kind of concerning really. And I don't quite understand why so many US gov't users are allowed to route their email through a NOC in Canada (disclosure: I'm Canadian).

  • by soft_guy ( 534437 ) * on Friday April 20, 2007 @12:45PM (#18813193)
    I am a dev and my motto is "all software engineers are liars and idiots" and I include myself in this. If you want to know how something is supposed to work in theory, ask the dev. If you want to know the actual behavior, ask QA.
  • by slashbob22 ( 918040 ) on Friday April 20, 2007 @12:55PM (#18813317)

    Nobody died. No planes crashed. No reactors melted down.
    You are safe on the planes crashing and on the meltdowns. I didn't hear of any such incidents.

    However, I will argue that the outage may have contributed to deaths. There are many hospitals which use Blackberries instead of pagers (2-way comms), so paging a surgeon, doctor, or other staff to an emergency may not have worked well. I am sure there are other examples of critical applications (which should or should not use Blackberries) that may have been affected. The obvious problem is that I cannot provide stats, because they certainly aren't available, but stating flatly that nobody died would be overstating what we know.

    On a lighter note, other casualties may have been caused by crackberry withdrawal: people walking into walls because they aren't used to walking without reading their Blackberry, people jumping out of buildings because they can't get their latest stock quote, etc.
  • by Ralph Spoilsport ( 673134 ) on Friday April 20, 2007 @01:25PM (#18813771) Journal
    If the product had been properly tested (and face it - outside of medical and military applications, how much of ANYTHING is properly tested?) they'd have found, reported, and fixed the bug weeks earlier.

    You can't expect programmers to do perfect work, even with unit testing and all the other basic amenities of software development. It requires QA, and that is something sorely lacking in contemporary software products. From the smallest OS X widget to MS Vista, testing matters.

    RS

  • by SABME ( 524360 ) on Friday April 20, 2007 @01:59PM (#18814267)
    As a QA guy, I can't tell you how many times I've been told, on a Monday, "Do whatever is required to make sure this software is stable, as long as you release it on Friday."

    We're lucky we can get through a single pass of functionality testing; forget about load/stress/performance/long-term stability. We're lucky we have a test environment composed of hardware retired from production, because it was deemed insufficient to meet the needs of the production environment.

    True story: I was supposed to be testing a product that interfaced with an IP videoconferencing bridge. Except we had no such bridge in our environment, and no budget to purchase one. No one in management thought this was absurd until I took a cardboard box and wrote "Video Bridge" on it, along with little holes labeled eth0, eth1, DS1, etc. (much like the famous P-p-p-powerbook). I complained to the VP of Engineering that our tests were blocked because I couldn't get the video bridge to come up on our lab network. When I showed him the "box," he got the point. :-).

    In my experience, customers are more interested in getting new features ASAP than they are in reliability, which is why so many organizations put a premium on rolling out new features quickly. When was the last time anyone worked on a release with no new features outside of performance and stability improvements?
  • by mutterc ( 828335 ) on Friday April 20, 2007 @02:12PM (#18814499)

    How many people here have checked in buggy code that neither management nor QA knew was buggy? (crickets)

    How many people here have been on projects where management shoved the code out the door despite major bugs that they knew about? (thunderous applause)

    How many people here have tried to get time on The Schedule to do something The Right Way, only to be told by management to do it half-assed, because that's all there's time/resources for? (applause, hooting)

    There you go.
