
Certificate Expiry Leads to Total Outage For Microsoft Azure Secured Storage

rtfa-troll writes "There has been a worldwide (all locations) total outage of storage in Microsoft's Azure cloud. Apparently, 'Microsoft unwittingly let an online security certificate expire Friday, triggering a worldwide outage in an online service that stores data for a wide range of business customers,' according to the San Francisco Chronicle (also Yahoo and the Register). Perhaps too much time has been spent sucking up to storage vendors and not enough looking after the customers? This comes directly after a week-long outage of one of Microsoft's SQL server components in Azure. This is not the first time that we have discussed major outages on Azure and probably won't be the last. It's certainly also not the first time we have discussed Microsoft cloud systems making users' data unavailable."

  • Expirty? (Score:1, Insightful)

    by Anonymous Coward on Saturday February 23, 2013 @10:21AM (#42988917)

    Timothy!! It's your fucking JOB!

  • Tip of the iceberg (Score:5, Insightful)

    by gmuslera ( 3436 ) on Saturday February 23, 2013 @10:38AM (#42988997) Homepage Journal
    If you can't trust Microsoft with small but essential things like this, should you trust them with bigger ones?
  • by Anonymous Coward on Saturday February 23, 2013 @10:49AM (#42989041)

    How does Timothy fuck up so many words?

    Occam's Razor applies here. The simplest explanation is: because he's an incompetent, stupid cunt who can't do basic things correctly.

  • Re:Typical. (Score:5, Insightful)

    by rtb61 ( 674572 ) on Saturday February 23, 2013 @11:20AM (#42989179) Homepage

    M$ has a history of weak customer focus, hence it will fail in any industry that demands the highest levels of customer focus. For a cloud service to be down for a day is inexcusable, and frankly any IT management staff that fails to acknowledge these failures and still uses or recommends Azure should be fired. Downtime should be measured in minutes, not days; anything longer should be considered catastrophic failure. M$ is far too used to its EULAs, a warranty without a warranty, and has become woefully complacent about actually guaranteeing a supply of service. "Meh, it mostly works" is their motto, and "we'll fix it next time round, for sure this time."

  • Re:Somebody (Score:5, Insightful)

    by Glendale2x ( 210533 ) <slashdot@ninjam o n k ey.us> on Saturday February 23, 2013 @11:23AM (#42989225) Homepage

    Eh, don't put anything you can't live without on systems outside of your control.

  • by click2005 ( 921437 ) * on Saturday February 23, 2013 @11:34AM (#42989261)

    The Blue Sky of Cloud Death

  • by johnlcallaway ( 165670 ) on Saturday February 23, 2013 @11:41AM (#42989297)
    ... this is what you get. Sure, the same thing could happen at any company. But at least then you can fire your incompetent staff.

    Once you deploy to a vendor, you are stuck. From what I've seen, you can't easily move data and code from one vendor to another. One of our clients is in the UK Azure cloud and we have to BCP about 6M rows from their server to our system every week. It takes over 90 minutes and constantly fails because the connection drops (see the sketch after this comment). We've looked at deploying systems to various clouds, and the costs were not worth it.

    I will NEVER put any critical business system in someone else's cloud. At worst, I might put it in someone's data center on *MY* servers. The cloud seems fine for small-business startups and unimportant personal data, i.e. businesses where no one would even notice if their site was down for a day.

    BTW .. 'Cloud' computing is just remote virtual servers over the Internet. It's really not something new and original. People act like it's some amazing new 'thing'. Well .. it's not. It's just another way of letting companies with limited or no tech skills put up a web site or store data. It's expensive, proprietary, and I doubt it's very cost-effective in the long run.
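
    For what it's worth, here is a rough sketch of one way to make that kind of weekly pull restartable instead of all-or-nothing: chunk the copy on a key column and retry each chunk on failure. Everything in it (the connection string, table, 'id'/'payload' columns, and the local load step) is a made-up placeholder to illustrate the idea, not the poster's actual setup.

```python
import time
import pyodbc

# Placeholders -- substitute real connection details and table/column names.
SRC_CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=source;DATABASE=db;UID=user;PWD=secret")
BATCH = 50_000        # rows per round trip
MAX_RETRIES = 5

def fetch_batch(last_id):
    """Open a fresh connection and pull the next chunk, keyed on an assumed ascending 'id' column."""
    conn = pyodbc.connect(SRC_CONN, timeout=30)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT TOP (?) id, payload FROM dbo.SourceTable "
            "WHERE id > ? ORDER BY id",
            BATCH, last_id)
        return cur.fetchall()
    finally:
        conn.close()

last_id = 0
while True:
    for attempt in range(MAX_RETRIES):
        try:
            rows = fetch_batch(last_id)
            break
        except pyodbc.Error:
            time.sleep(2 ** attempt)   # back off, then retry from the last good row
    else:
        raise RuntimeError("source connection kept failing")
    if not rows:
        break
    # load_rows_locally(rows)  # hypothetical: however the receiving side ingests data
    last_id = rows[-1].id
```

    A dropped connection then costs one chunk's worth of work rather than the whole 90-minute transfer; the lock-in complaint stands either way.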
  • Re:Somebody (Score:1, Insightful)

    by Anonymous Coward on Saturday February 23, 2013 @11:44AM (#42989313)
    Because he's an expirt.
  • Re:Somebody (Score:4, Insightful)

    by Anonymous Coward on Saturday February 23, 2013 @11:53AM (#42989377)

    Somehow I feel those worker visas are the issue here.

    Anything else you'd like to blame on foreigners?

    Declining population of ducks in the local pond?
    Chips no longer served in old newspaper?
    Lack of respect for elders?
    Banning of blackboards in schools?
    Rampant rape and violence all foreigners bring to your little Daily Mail reading village?

  • Monitoring Fail (Score:4, Insightful)

    by HTMLSpinnr ( 531389 ) on Saturday February 23, 2013 @11:58AM (#42989405) Homepage
    I find it hard to believe that anyone who maintains such a large fleet of services wouldn't have set up some sort of trivial monitoring (I know they own a product or two) that includes SSL certificate expiration warnings. 30+ days out, a ticket (or some sort of actionable tracking mechanism) should have been generated, alerting those responsible to start taking action. Said ticket should have become progressively higher severity as the expiration date loomed (meaning nothing had been updated), which in any sane company would have meant higher and higher visibility.

    That way, even if an extensive test plan were required for such a simple operation, they had plenty of time to execute it and still not miss the boat. (A sketch of the sort of expiry check I mean follows this comment.)

    Having worked with MS in other ways, the lack of foresight here, combined with the inability to act quickly, just shows that this sort of customer-forward thinking doesn't exist inside the MS mindset.
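
    To the parent's point, the check itself really is small. Here is a minimal sketch of the kind of expiry monitor being described; the endpoint list, thresholds, and print-as-alert are made-up placeholders, not anything Microsoft actually runs.

```python
import socket
import ssl
from datetime import datetime, timezone

# Hypothetical inventory -- in practice this would come from whatever tracks the fleet.
ENDPOINTS = ["blob.core.windows.net"]

WARN_DAYS = 30   # open a ticket this far out
CRIT_DAYS = 7    # escalate severity as the expiration date looms

def days_until_expiry(host, port=443):
    """Connect over TLS and return the number of days until the server certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Feb 22 12:00:00 2014 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host in ENDPOINTS:
    remaining = days_until_expiry(host)
    if remaining <= CRIT_DAYS:
        print(f"CRITICAL: {host} certificate expires in {remaining} days")
    elif remaining <= WARN_DAYS:
        print(f"WARNING: {host} certificate expires in {remaining} days")
```

    In a real shop the prints would be tickets in whatever tracking system is in use, run on a schedule, but the point stands: the data needed to warn 30 days out is trivially available.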
  • by Junta ( 36770 ) on Saturday February 23, 2013 @12:10PM (#42989499)

    The reality is, if you outsource your hosting to a single company, there will always be single points of failure.

    There will be architectural ones, like a root of trust expiring and the security framework taking everything down.

    There will be bugs that can bite all of their instances in the same way at the same time.

    There will be business realities, like the provider failing to pay its electric bills, collapsing, or simply closing down its hosting business for the sake of other business interests.

    Ideally:
    -You must keep ownership of all data required to set up anywhere, at all times. Even if you host nothing publicly yourself, you must ensure all your data exists on storage that you own.
    -You either do not outsource your hosting (in which case your single point of failure, business-wise, would take you out anyway) or else you outsource to financially independent companies. "Everything to EC2" is a huge mistake, just as much as "everything to Azure" is a huge mistake.
    -Never trust a provider's security promises beyond what they explicitly accept liability for. If you consider the potential risk to be "priceless", then you cannot host it. If you do know what your exposure is (e.g. you could be sued for 20 million), then only host it if the provider will assume liability to the tune of 20 million.

  • Re:Somebody (Score:5, Insightful)

    by DarkOx ( 621550 ) on Saturday February 23, 2013 @12:21PM (#42989563) Journal

    Right, and I think this is an important aspect of the problem here.

    There is simply no substitute for having all your I's dotted and T's crossed with large integrated systems like this. This is a culture problem, not an "individual screwed up" problem. If you just fire the guy, there will be lots of awareness, but the takeaway most of your remaining people will get is "don't forget to check the certificate expiry dates, that'll get you canned." Many of them, traumatized by the experience, will dutifully check certificate dates for the rest of their careers, but this will do nothing to prevent your next major outage, because that will almost certainly be the result of something else.

    Everyone is pushing this idea that virtualization + "dev ops" + management/monitoring is going to let us have one admin do what was once the work of ten. The fact is it just does not work like that. Management/monitoring like Microsoft MOM, for example, requires you to have all the failure modes identified and the scripts written to check conditions like expiry dates and trigger the alerts. Unless everyone is really disciplined about getting all the routine maintenance tasks in there, it won't help with something like this. That takes time your ONE admin has not got, and discipline that breaks down when someone is overworked.

    The "dev ops" and vitalization stuff is all great in terms of how much can be automated. Someone has to develop that automation though. Your ONE guy does not have time to build and test his generic deployment scrip when you promised your customers you'd have their infrastructure stood up last week.

    It comes down to the business recognizing it's important to have good people, enough people, and a willingness to invest in making sure the job is done correctly and completely every time, and that documentation is maintained in a way everyone knows how to use. Checklists need to be kept and followed, etc. IT got away from plant-engineering-style discipline when hardware got cheap: you no longer had to worry about that one computer you had failing.

    As we move back to more consolidated and integrated solutions, management is going to have to get used to the idea again that there is some people-time investment that must be made. It's great that you can save on power, cooling equipment, and headcount, but you can't cut headcount too far, because the more consolidated you get, the less you can afford for anything to go wrong, so it all must be checked, double-checked, and checked again just to be sure. This applies whether you do it yourself or pay your cloud provider to do it. Either way, cloud services so far have been mostly a race to the bottom, and that is going to cause some people to learn some very painful lessons if the industry remains on its current trajectory.

  • by RazorSharp ( 1418697 ) on Saturday February 23, 2013 @02:24PM (#42990355)

    Azure - bright blue in color, like a cloudless sky

  • Re:Somebody (Score:4, Insightful)

    by RazorSharp ( 1418697 ) on Saturday February 23, 2013 @02:39PM (#42990439)

    If you want to defend H-1B workers and dirt-cheap Indian code monkeys, perhaps you should make a logical argument.

    I don't think the guy you're responding to had the most well-thought-out argument, but your response did nothing to refute it. You accuse him of xenophobia when it's obvious he wasn't talking about foreigners in general; he was talking about specific foreign workers hired by American firms looking to cut costs. That doesn't mean all foreigners are incompetent -- the assumption is that the most competent foreigners don't have to accept lower-than-deserved wages to undercut American workers. There's a reason the foreigners who undercut American jobs are willing to accept less money -- they're not worth as much.

    Shame on the four mods who upvoted your post.

  • Re:Monitoring Fail (Score:5, Insightful)

    by rabbitfood ( 586031 ) on Saturday February 23, 2013 @03:28PM (#42990763)

    Simple operation? You've clearly never worked for a large company.

    Even if a warning was trickled down a month ago, and we've no reason to assume it wasn't, the person whose job it is to act on it, provided they weren't on vacation, won't have simply thrown five dollars at a registrar. They'll have had to put in a request to the finance department, probably via a cost-management chain of command, with a full description of what needed to be paid to whom and why, with payee reference, cost-center code, expense code and departmental authorization, and hoped it would arrive in time to be allocated at the next monthly rubber-stamp meeting. Assuming the application contained no errors, was suitably endorsed and was made against an allocated budget that hadn't been over-spent and wasn't under review, then, perhaps, in the fullness of time, it might have received approval and been sent back down the chain for subsequent escalation to the bought-ledger department, who'd have looked at the due date, added ninety days and put it on the bottom of the pile. After those ninety days, when the finance folk began to take a view to assessing its urgency, unless they found a proper purchase order from the supplier, and a full set of signed terms and conditions of purchase, non-disclosure agreements, sustainability declarations and ethical supply-chain statements, as now required by any self-respecting outfit, it'll have been put aside and, eventually, sent back round to be done properly. Or, if it all checked out first time, it'll have been put on the system for calendaring into the next round of payment processing.

    I'm sure it might be possible to streamline aspects of such mechanisms, but to suggest there's anything trivial about them is a touch hasty. But you never know. Perhaps they're already thinking of planning a meeting to discuss it, and are working on a framework for identifying the stakeholders as I write.
