The Sidekick Failure and Cloud Culpability

miller60 writes "There's a vigorous debate among cloud pundits about whether the apparent loss of all Sidekick users' data is a reflection on the trustworthiness of cloud computing or simply another cautionary tale about poor backup practices. InformationWeek calls the incident 'a code red cloud disaster.' But some cloud technologists insist data center failures are not cloud failures. Is this distinction meaningful? Or does the cloud movement bear the burden of fuzzy definitions in assessing its shortcomings as well as its promise?"
  • Management (Score:4, Interesting)

    by FredFredrickson ( 1177871 ) * on Monday October 12, 2009 @09:31AM (#29718647) Homepage Journal
    It's usually a decision on management's side to not use best practices, despite warnings from the tech dept.

    tl;dr: There's nothing wrong with the technology, just the greedy bastards using it.
    • Re:Management (Score:5, Insightful)

      by sopssa ( 1498795 ) * <sopssa@email.com> on Monday October 12, 2009 @09:32AM (#29718659) Journal

      As always, cloud computing/hosting/whatever is a vague term, used like any other buzzword. I just see it as a platform where resources are allocated automatically and the underlying system takes care of keeping them available.

      The same failure points are there. You're just handing the trust and the management over to someone else. Even if they do have backup plans and certain levels of redundancy, it can always fail. Cloud computing isn't something magical.

      “Similarly datacenters fail, get disconnected, overheat, flood, burn to the ground and so on, but these events should not cause any more than a minor interruption for end users. Otherwise how are they different from ‘legacy’ web applications?”

      That's because they aren't. The system is just managed by someone else, and it's managed for thousands of people at the same time, so it's cheaper. Kind of like what Akamai has long been doing with their content delivery network - it's cheaper for the providers because they don't have to build the infrastructure themselves, and it's cheaper for Akamai because they do it for so many clients.

      • Re:Management (Score:5, Interesting)

        by Splab ( 574204 ) on Monday October 12, 2009 @09:42AM (#29718799)

        Well, there is one difference. Cloud computing and virtual servers are to computers what keychains are to keys: they enable you to lose everything at once.

        Yes, it is highly convenient and more efficient to have everything in one place, but so much more fun when you drop your "chain" in the sewer.

        • Re:Management (Score:5, Insightful)

          by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Monday October 12, 2009 @10:03AM (#29719057) Homepage

          Well, there is one difference. Cloud computing and virtual servers are to computers what keychains are to keys: they enable you to lose everything at once.

          It's not really a difference. With home-grown datacenters you still have that risk, unless you do something like building multiple redundant facilities in different locales and managing some kind of replication and backup strategy. But all of that is the same when you go to a cloud provider, except you're not having to futz around with the physical facilities yourself.

          There's no magic. All we're seeing is stupid people getting burned because they didn't use basic due diligence.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Except the problem here is that when a large service goes down in the Cloud, millions of people can be affected.

            For example, what if Google has its way with universities integrating with its system (Docs, Gmail, etc.), and Google has the sort of problem this Sidekick failure does? Now not just one university (if it hosts its own data center) has lost all of its data; any university relying on Google is out all the data hosted in the cloud.

            • Re:Management (Score:4, Insightful)

              by QuantumRiff ( 120817 ) on Monday October 12, 2009 @10:38AM (#29719479)

              True, but how much more money and brain power does Google have to invest in datacenter design and disaster recovery than your local college?

              Seriously... I worked at one. All our stuff was on "next day parts" from Dell. We had a single internet connection to the campus, a single Linux-based sendmail email server, etc.

              Granted, I had tapes up the wazoo, and could retrieve any file for the past X years, but downtime is still downtime.

              Then you have Google, with multiple sites, multiple connections, replication, Load balancers, etc.

              Not only do they have more to invest, but when they call up a vendor and say "we are Google, we have an outage, and we need some things from you", I bet those vendors jump a little faster than when a local school IT guy calls them up.

              • Re: (Score:2, Insightful)

                by Anonymous Coward

                Although I agree to some extent with your argument, the fact remains that it sure didn't work out in this case!

                Sometimes a company will so thoroughly fall in love with the savings of server consolidation, they fail to implement (and especially TEST) their nifty backup and failover infrastructure. You might think that a company that SELLS cloud-style services would be at the forefront of robust testing. Evidently not in this case. Competent datacenter sysadmins are an endangered species.

                Sidekick users woul

              • Re: (Score:3, Insightful)

                by mcrbids ( 148650 )

                Don't confuse downtime (e.g., a server powered down) with a catastrophic failure like this one (total, irretrievable data loss).

                Your school was a far better place (apparently) than MS Danger. Although downtime was more likely with your single sendmail server, you would still expect about 99.9% to 99.99% uptime year on year. That equates to somewhere between roughly nine hours and under one hour of downtime per year. That's definitely down in the 'minor inconvenience' range for a school.

                And your risk of catastrophic failure with all the (verified?) tapes
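
                For the record, the arithmetic behind those uptime figures, as a minimal sketch (the only assumption is an 8,760-hour year; real SLAs often exclude scheduled maintenance):

                ```python
                # Convert an availability percentage into expected downtime per year.
                HOURS_PER_YEAR = 365 * 24  # 8760

                for availability in (0.999, 0.9995, 0.9999):
                    downtime_hours = HOURS_PER_YEAR * (1 - availability)
                    print(f"{availability:.2%} uptime -> {downtime_hours:.1f} h downtime/year")
                ```

                So a single well-run server at three nines costs you about nine hours a year, four nines under an hour, and 99.95% about four and a half hours: all still in "minor inconvenience" territory for a school.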

            • Re:Management (Score:5, Insightful)

              by BrokenHalo ( 565198 ) on Monday October 12, 2009 @11:05AM (#29719869)
              This all comes back to the thrust of the OP: whether the apparent loss of all Sidekick users' data is a reflection on the trustworthiness of cloud computing or simply another cautionary tale about poor backup practices.

              The simple truth, of course, is that it is both. And the only solution here is the old one: if you want something done properly, you will have to do it yourself. If your data, documents or whatever are in any way important to you, you should not be relying on anyone else to keep them safe. Simple as that, and no excuses.
              • Re: (Score:3, Insightful)

                by Eskarel ( 565631 )

                That's really a rather idiotic statement.

                If your data is important, then you take its storage seriously. Sometimes that means you host it yourself; sometimes it means you get someone else to host it for you. You don't host your critical data yourself if you can't afford the staff and infrastructure to support it, and if you've already got the staff and infrastructure, you don't pay someone else to do it.

                The important thing is that you take it seriously. That means contracts with your data storage provider with exactl

          • Re:Management (Score:5, Insightful)

            by wickerprints ( 1094741 ) on Monday October 12, 2009 @10:44AM (#29719577)

            To be fair, Sidekick users didn't have a viable means to back up their personal data that was being pulled from Microsoft/Danger servers. I don't think it's reasonable to expect the users to find some hack or unofficial method to copy all their data from their devices. The only blame they could be assigned is that they bought the service being sold. Your criticism would be valid for, say, iPhone users, since the user has a backup stored on their computer. But no such functionality exists for the Sidekick, as far as I am aware.

            And as to who is really being burned here.... Obviously not Microsoft/Danger. Microsoft doesn't give two shits about this, since their acquisition of Danger in 2008 was really about cannibalizing their talent for Windows Mobile 7, as the Pink project has shown. Danger is just a shell of its former self--the damage was done long before this latest failure, which I think was an inevitable consequence of the acquisition. The ones who got burned are T-Mobile (for trusting Microsoft to manage Danger, and Danger to maintain a proper backup solution), and of course, the consumers.

            The real issue, of course, is that data is always at risk of being lost no matter how, where, or in what amount it is stored. The passage of time guarantees it. But people want to believe in the existence of certainties, in the notion that if something has a 99.9999% reliability, then we can effectively ignore the minuscule probability of failure. But failures happen all the time and there is no such guarantee. We need to rid ourselves of this delusion that data can somehow be made "safe," that risk can be ignored when made small. Cloud computing is just the flavor of the day.
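
            To put a number on that: a tiny per-account failure probability stops being negligible once it's multiplied across a large user base. A minimal sketch with hypothetical figures (both numbers below are assumptions for illustration, not Danger's actual statistics):

            ```python
            # Expected number of data-loss events per year across a large user base.
            p_loss_per_account = 1e-6   # assumed annual chance one account loses data
            accounts = 50_000_000       # assumed subscriber count (illustrative only)

            expected_hits = p_loss_per_account * accounts
            p_at_least_one = 1 - (1 - p_loss_per_account) ** accounts

            print(f"expected accounts hit per year: {expected_hits:.0f}")   # ~50
            print(f"P(at least one loss somewhere): {p_at_least_one:.6f}")  # ~1.0
            ```

            And that calculation assumes failures are independent; a single botched upgrade, as happened here, hits every account at once.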

            I knew someone who worked at Danger years ago when the company was still fairly new. It was, at the time, an amazing technology. There was nothing like it. They had so much going for them, and there was a lot of good talent working there. One thing that impressed me was how they solved the problem of mobile web browsing. At the time, mobile web browsing seriously sucked ass. It was not only slow, but many sites simply would not load. Danger solved that by re-parsing the sites on their servers so that pages would look good and function properly on your mobile device. It was the best solution until mobile OSes and hardware became powerful and complex enough to support full browsing; and even then, the UI needed to be tightly integrated before browsing became efficient instead of tedious. It's sad to see such a pioneering company wither on the vine.

            • Re: (Score:3, Insightful)

              To be fair, Sidekick users didn't have a viable means to back up their personal data that was being pulled from Microsoft/Danger servers. I don't think it's reasonable to expect the users to find some hack or unofficial method to copy all their data from their devices.

              Absolutely correct. Wish I had mod points.

          • Re:Management (Score:4, Informative)

            by pz ( 113803 ) on Monday October 12, 2009 @11:36AM (#29720241) Journal

            There's no magic. All we're seeing is stupid people getting burned because they didn't use basic due diligence.

            Yes and no. The people getting burned here are customers, by the many thousands. You can't expect the end-user to know what the DRP/BCP (disaster recovery / business continuity plan) is for a subcontractor of the provider of their wireless communicator data plan. I wouldn't call the end-users stupid, and they are the ones most significantly affected in this case.

        • Re: (Score:2, Funny)

          by sexconker ( 1179573 )

          I don't know about you, but I keep all my keys in a safe.

          When I go out, I open up the safe (which has its own key), and then take only the keys I'll need for my outing.

          I keep them loose, in my various pockets.

          When I return, I return the keys.

          One time I locked the key to the safe inside the safe itself.

          What a day that was.

        • As always, the correct response to this is that it depends.

          Using Akamai as an example, they have mirrors in dozens and dozens of locations around the world. So if one or two of Akamai's physical data centers goes up in flames, well, there are the other fifty data centers still around with your data. In this case, there is certainly redundancy in the cloud.

          However, if, say, Google kept all of their apps data in one data center, and didn't mirror it, then if that data center gets swallowed by the earth,

      • Re:Management (Score:4, Insightful)

        by dFaust ( 546790 ) on Monday October 12, 2009 @10:03AM (#29719067)
        But if Akamai loses a server, I don't have to repopulate the gigs of data they're hosting for me - it's not lost, it's just no longer on that particular server that died. That's exactly why I consider Akamai to be "the cloud", and why it doesn't seem like Danger was. Especially with an infrastructure like Akamai's or Google's, where things are geographically distributed, you just don't hear about servers dying, and you might not even hear about data centers dying (unless it places an unusually high burden somewhere and causes performance issues - but you don't hear about data loss as a result).
      • by joh ( 27088 )

        The thing is that "the Cloud" means absolutely nothing in most cases. In almost every single case it's just remote servers storing your data, as with every web app and every IMAP server for ages now. The word "cloud" is just used to imply that there's something foggy you don't know anything about, and to make you think it can't fail. But of course the data is in fact stored somewhere, and if there's no backup and someone wrecks the server, your data is gone for good, as it always is in such cases.

        I've ha

  • If you can't trust your outsourcing partner, replace them or bring the work in-house.

    • Re: (Score:3, Interesting)

      by postbigbang ( 761081 )

      Trust? All that data's gone without much chance of it being recovered, as in bye-bye.

      Do you think that perhaps T-Mobile, or their "trusted partners" might have had a full backup (an IT 101 sort of plan), a mirror or highly available machine (an IT 201 sort of plan), a disaster plan (IT 301), or maybe just an encrypted torrent out there somewhere?

      No.

      Heads oughta roll. Cloud computing is only as good as you make it; it only represents a server outside of your office's NOC or physical boundaries. Nothing else.

      • by sjames ( 1099 )

        Or do something sensible like store the data on the phone AND on a server, for a best-of-both-worlds solution with added reliability?

        • You would want that to work.

          Maybe use something like rseven.com.

          How many offsetting offsets to the offsets does one need to ensure that simple data is backed up??

          In the real world, too few people back up anything, then cry lots of tears. Here are people who actually went through the process of getting the service. Now their backups are missing in action. To the hosts of this service: off with your heads. Not excusable.

    • If you can't trust your outsourcing partner, replace them or bring the work in-house.

      Trust? I don't think the upper management trusts local IT either.

      Really, I don't think it matters who runs the servers or what they call them as long as it is run well.

      Just because it's outsourced or in-house, gold-plated big iron or cloud computing, doesn't make it good or bad; either way can be run poorly with the wrong administration.

      Personally I think things should be done in house merely for moral issues bit busin

    How CAN you trust them? Any big corporation offering those services is after only one thing: profit. And to get it, they WILL cut corners. Doesn't everyone? But in this case you'll have no idea where they cut corners, no idea where you're unsafe, and no idea how much downtime you might have if something goes wrong.

      I'm sorry, but the main issues with Cloud Computing aren't technological, they're issues of trust and reliability of the human, financial and legal factors at work. And when it's you vs Big Corp, you'l

  • by syntap ( 242090 ) on Monday October 12, 2009 @09:39AM (#29718733)

    A company called "Danger"? Didn't that throw up any red flags for ANYONE?

    • Re: (Score:3, Funny)

      by Sockatume ( 732728 )

      I thought the same thing about "Microsoft".

      Okay guys, that joke's done, let's get on with our lives.

      • I never ever heard of Microsoft prior to 1992 (first time I used Windows 3). Prior to that the world revolved around IBM, Apple, and Commodore. Funny how fast things can change, and a small company can leverage itself to the top of the heap.

        • by gnick ( 1211984 )

          You'd never heard of MS before 1992? MS-DOS [wikipedia.org] was a pretty darned popular home OS through most of the 80's. Wikipedia tells me that it was the most popular DOS variant and the most popular personal computer OS at the time (which is what I remember too, but I was only 4 when it came out). You might not have known what MS stood for, but MS-DOS was everywhere. Where were you hiding?

    • by garcia ( 6573 ) on Monday October 12, 2009 @09:47AM (#29718853)

      Didn't that throw up any red flags for ANYONE?

      I was a Sidekick user from 4/2004 until 10/2008. There was only one 'catastrophic' failure in that time that left Sidekick users without data service for an extended period. Danger produced one of the best mobile devices, which in many ways is still better than anything out there, even though the OS and the devices that utilize it (the various Sidekick models that exist these days) are quite a bit outdated compared to devices like the iPhone.

      I miss my Sidekick immensely. I loved true multitasking, a fully capable QWERTY keyboard, and incredible battery life. Unfortunately it didn't sync well with calendaring software, didn't keep up with music playing, and is now partially controlled by Microsoft. There have been immense trade-offs in moving to the iPhone, but based on my main reason for owning one (I ride the bus and enjoy the music/video player and screen size) it was the right choice for me.

      That said, "cloud computing" is something which usually works (and did, in the case of the Sidekick, from 2002 on). I don't think this is a proven warning sign that "cloud computing" isn't as reliable as everyone believes; I just think it's proof that companies need to do a much better job of ensuring data integrity than they could have ever imagined before.

      Will I stop using Flickr, Google products, and other future "cloud" devices/software because of this? No. I am smart enough, as a computer-savvy end-user, to keep my own backups of my data, but I do believe people need to become better educated in what can and will happen as we move to the model we have slowly been adopting over the last 10 years.

  • I know my songs, videos, and other important files are backed up across three drives. I don't know if the same would be true if I stored them online, and this major failure of Sidekick demonstrates I'm right not to trust them.

    • Also with ISPs like Comcast imposing 250 gig limits, why on earth would I want to offload my information across the net? It makes more sense to *minimize* the data transfer to avoid overage fees, not increase it.

      • While true, I'd say that if you're unhappy with a 250GB cap, do your darndest to at least come close to it each month (though in reality, if you're unhappy with it, you're likely coming close to or breaking it each month anyway). The only thing that will raise that number (or cause them to upgrade infrastructure allowing them to raise it) is if they see their average monthly consumption getting too close to that number. If their average monthly consumption is 2.5GB (I know people whose monthly bandwidth sta

    • Re: (Score:3, Insightful)

      by slim ( 1652 )

      I know my songs, videos, and other important files are backed up across three drives. I don't know if the same would be true if I stored them online, and this major failure of Sidekick demonstrates I'm right not to trust them.

      That depends entirely on the online storage service you use. If your contract says the files are backed up across triple drives, then you've a right to expect that they are. If your contract doesn't say that, then you shouldn't expect it. Simple.

      Now, I'd argue that any cloud service worthy of the name ought to have very robust mirrored storage. But since there's no legal definition of the word, you'd better read the contract.

      • by vadim_t ( 324782 )

        Contracts aren't a guarantee. They may sign a contract and not follow the terms. They may go bankrupt. A massive failure may make the total compensation owed larger than the money the company has. If you're a small customer, they may be able to screw you over and ignore it, because suing isn't worthwhile.

        Even if a service ended up paying me for a data loss, there's still information that can't be replaced.

        I can't take another photo of my dead cat.

        A company may not be able to rebuild a database

  • In the end, it doesn't really matter if it's a data center failure or a "cloud" failure. It matters who the user blames. And if you trumpet yourself as "in the cloud", and then that cloud rains on your consumer, whoever is at fault, ultimately it's you, the provider, who has a problem.

  • by iamacat ( 583406 ) on Monday October 12, 2009 @09:45AM (#29718831)

    Just like people lose their stuff on personal hard drives when not backed up, they will lose cloud data when not backed up. Both kinds of computing have merits, and long term persistence of data is not automatic with either. Most people do not place THAT high a value on backups of their cell phones; they typically sync with a PC anyway. But any business that doesn't have weekly, reliable, offsite backups of its fundamental assets should be sued by shareholders/customers for irresponsibility, whether they use the cloud or not.

    • We take these sorts of catastrophic failures more seriously. If my data is on my computer, I know that I'm responsible for my own damn data - if I don't back it up, it's my own fault. But more importantly, when I lose everything, you don't care. If I'm holding millions of peoples' data, you suddenly care a lot more - especially if you're one of the people.

      My point is, with all that data they damn well better be backing up properly, because it's out of my control. If I don't back up properly, I had the optio

    • by alen ( 225700 )

      The hype about "cloud computing" is that there are never any failures and all your data is always going to be safe. At least that's the way the tech rags hype it.

    • Just like people lose their stuff on personal hard drives when not backed up, they will lose cloud data when not backed up. Both kinds of computing have merits, and long term persistence of data is not automatic with either.

      Neither RAID nor cloud computing is a backup solution. It's merely a way to get more uptime and availability of data.

      If the user overwrites or deletes the data on a RAID array or an online storage service, then you've lost your data just the same as if the server had crashed.
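
      A minimal sketch of the distinction, with in-memory dicts standing in for storage volumes (the file names are invented for illustration): a mirror replicates deletions faithfully, while a point-in-time backup survives them.

      ```python
      import copy
      import datetime

      primary = {"contacts.db": "alice,bob", "photos/cat.jpg": "<bytes>"}
      mirror = {}      # RAID-style replica: kept identical to the primary
      snapshots = []   # backups: point-in-time copies you can roll back to

      def sync_mirror():
          """Make the replica identical to the primary -- deletions included."""
          mirror.clear()
          mirror.update(primary)

      def take_snapshot():
          """Record an independent point-in-time copy of the primary."""
          snapshots.append((datetime.datetime.now(), copy.deepcopy(primary)))

      take_snapshot()                  # last night's backup
      del primary["photos/cat.jpg"]    # user (or an upgrade script) deletes data
      sync_mirror()                    # the mirror dutifully loses it too

      print(sorted(mirror))            # cat.jpg is gone here as well
      print(sorted(snapshots[-1][1]))  # ...but still present in the snapshot
      ```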

    • Re: (Score:3, Insightful)

      by John Whitley ( 6067 )

      Heck, I know folks who've lost entire well-known (hobbyist) web-portals some years back due to provider server failures. It was a harsh lesson for those involved. So much for the provider's backup policies. The real solution is to have multiple copies of the data, ideally in different formats. For example, when I was in grad school the University had (for the time) a huge email installation, basically full email hosting for the entire institution. The server and storage spec was excellent -- a big SAN-

  • by dFaust ( 546790 ) on Monday October 12, 2009 @09:50AM (#29718901)

    Personally, I always interpreted cloud computing as software that's running on a number of boxes, where that number can fluctuate without being meaningful (obviously there are performance implications depending on the overall load and the number of boxes, but one box going down doesn't inherently bring down the system). One nice thing is that these boxes can be geographically distributed as well - so when one data center gets nuked, the others are safe. Now, I realize geographic distribution isn't a requirement, but even still, the press release says the data loss is due to a "server failure." Not a data center failure, but the apparent failure of a single server.

    So is this really even "the cloud"? Does that mean that Geocities was "the cloud" or that every web host out there is "the cloud" because they've got my data running on a single machine? I certainly never interpreted it that way, but I'm no expert on the matter. It seems like if this data was in "the cloud" that it could have all been retrieved off of another machine somewhere. Perhaps for some customers those other machines might not yet be completely synced with very recent updates, but that would affect a small amount of data for a subset of customers.
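
    A sketch of that "not yet completely synced" window (the log names are invented): with asynchronous replication, a failed primary loses only the writes made since the last replication cycle, not everything.

    ```python
    # Asynchronous replication: a crash loses only the unreplicated tail.
    primary_log = []   # ordered writes applied to the primary
    replica_log = []   # the same writes, shipped over in periodic batches

    def write(value):
        primary_log.append(value)

    def replicate():
        replica_log.extend(primary_log[len(replica_log):])  # ship the delta

    write("update-1"); write("update-2")
    replicate()                # replica now holds updates 1-2
    write("update-3")          # newest write not yet shipped

    primary_log.clear()        # the primary "fails" irrecoverably
    print(replica_log)         # ['update-1', 'update-2'] survive
    ```

    The gap between replication cycles is the recovery-point window: customers whose writes fell inside it lose a little; everyone else loses nothing.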

    • Re: (Score:2, Funny)

      by Anonymous Coward
      Cloud computing is where your data goes up in smoke.

      AC for a reason.
    • by rwa2 ( 4391 ) * on Monday October 12, 2009 @10:38AM (#29719477) Homepage Journal

      Well, I was hoping to just google cloud vs. grid vs. distributed vs. cluster vs. etc. computing, but there doesn't seem to be much official-sounding distinction out there. Which means if we start our own thread here it might become definitive!

      "cloud" computing: fluffy term used by people who really don't know anything other than that they run their applications from a web page and their data appears to be stored on the web because they can access it from more than one web browser.

      "hosted" / "server farm" computing: buying server resources from someone who has a real datacenter who tries to take care of your hardware. You access all of your data over the network "cloud". Redundancy & support varies based on pricing & services.

      "grid" / "utility" computing: computing infrastructure where you should be able to simply scale up CPU, data, etc. resources for your operation simply by throwing money at turning on more boxes. You don't necessarily need to share it with others, though.

      "cluster" computing: a computing system made up of more or less independent, generally homogeneous nodes, where problems can be partitioned out. Generally has some form of redundancy so you don't lose work when a single node dies, but probably won't survive a data center failure.

      "distributed" computing: special applications that can be farmed out to the net to break parts of computing or storage across a heterogeneous network of computers distributed over many locations. Ideally it's written to be highly redundant and tolerate faults such as nodes joining / leaving the cluster.

      As far as reliability goes, the TIA data center tiers seem to be the only common way of talking about maintaining "business continuity". I've read through the standard briefly, and can somewhat paraphrase the intent (mildly inaccurately, mostly because the standard itself is kinda loose and not defined in much detail with regard to servers) as:

      Tier 1 "basic" : You have a room for servers with a door to keep random people from tripping over the plugs. Maybe you have a UPS on your server so it can do a graceful shutdown without data loss when the power or AC goes out.

      Tier 2 : You have your stuff in racks with a raised floor for air conditioning and some wire racks hanging from the ceiling for cable management.

      Tier 3 : You have redundant UPSes and RAIDs, CRACs, network links, and stuff, so you can make repairs when common things break without turning off the system (typically anything with moving parts or high currents, like power supplies, fans, disks, and batteries, needs to be hot-swappable). Which means you should also have some sort of monitoring and alert system so you know when that stuff actually fails, so you can replace it before the redundant components also fail. This is intended to reach 24x7 availability with high uptimes, maybe 3-5 nines.

      Tier 4 : Like Tier 3, but certified for mission-critical / life-critical use, like in hospitals and maybe for airplanes and stuff. It should survive prolonged power outages (so you have a diesel generator with a day or two worth of fuel.)

      Unfortunately, it just covers build specs for individual data centers, so it doesn't really cover other business continuity things like maintaining offsite backups so you can somewhat easily rebuild from scratch if a natural disaster takes out one of your data centers or something. But it's kind of different worlds of IT between designing facilities and architecting "cloud" services, which unfortunately don't seem to communicate or collaborate as much as they should to reach the kinds of "distributed grid of redundant load-sharing data centers" configurations we'd expect.

  • To my mind, this failure just goes to show that what people call clouds are merely the mainframes of yesterdecades... For the cloud to become "THE" cloud, the providers need to cooperate to replicate data across their different implementations, such that when one provider suffers an unforeseen crash of unforeseen magnitude, the data is still there in the "real" (in this definition) cloud.

    Sure, it would take no small amount of convincing to get the management drones to accept this, but I should think that a c

    • Re: (Score:3, Interesting)

      If someone tells you that they can cheaply prevent catastrophic failure, expect a catastrophic failure. Nothing can correct something like this, which involved an error propagating to the backups.

  • by mangastudent ( 718064 ) on Monday October 12, 2009 @09:52AM (#29718921)

    A single data center, apparently without even a geographically distinct failover site, is about as far as I can imagine from being a "cloud". Old-fashioned best practices, in the form of having two or more sites each capable of handling the entire load, would have prevented this particular mess, let alone classic cloud approaches like that of the Google File System [wikipedia.org] (GFS), which keeps at least three copies of a file's contents.

    (Granted, if you're storing vital stuff in GFS or Amazon S3 [wikipedia.org] you still have a logical single point of failure (e.g. a mistaken delete command) and therefore you aren't freed from the duty of doing your own backups, but that's a separate issue.)
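
    To make those two risks concrete, a back-of-the-envelope comparison (the 1% per-replica figure is an assumption for illustration): independent replica loss shrinks geometrically with the replica count, while a logical error, like a mistaken delete or a botched SAN migration, reaches every replica by design.

    ```python
    # Independent hardware loss vs. a replicated logical error.
    p_replica_lost = 0.01   # assumed annual chance one replica is destroyed
    replicas = 3            # GFS-style triple replication

    p_all_lost = p_replica_lost ** replicas   # needs independent placement
    print(f"P(all {replicas} replicas destroyed): {p_all_lost:.0e}")  # 1e-06

    # A bad delete (or a bad upgrade) is applied to every replica, so its
    # chance of reaching all copies is 1 -- replication can't help there.
    ```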

    Or we could just say that trusting Microsoft for anything is relatively unwise compared to other "higher tier" companies. Or that if you're depending on a service provider that's massively laying off staff you need to take action before something seriously ugly happens, because it likely will.

  • Assumptions (Score:5, Insightful)

    by eagl ( 86459 ) on Monday October 12, 2009 @09:52AM (#29718925) Journal

    Just because you're paying someone to store your data doesn't mean they care about that data as much as you do... That's one of the two big problems with cloud computing that can't be solved by technology. First, nobody cares about your data as much as you do. Second, nobody will protect your data (i.e., control its distribution and prevent unauthorized changes) to the level you find appropriate.

    It's usually a good idea to avoid using broad generalities (like I just did), but it seems like in general it would be a bad idea to let someone else be the sole keeper of anything even remotely important or sensitive. There are exceptions, but those seem to be internal to a company (i.e., the company runs its own cloud and has all employees use it), or military/government applications where centralized security and backup can keep user errors from becoming a real danger to the organization beyond "help, I lost my email!".

  • by trybywrench ( 584843 ) on Monday October 12, 2009 @09:53AM (#29718933)
    I think the key here is whether it was only T-Mobile's data that was lost, or whether every customer of the "cloud" was affected. If it was only T-Mobile's data, then the issue is T-Mobile's backup policy; if it was "cloud"-wide, then it's an issue with the "cloud" provider. In either case, I don't think you can paint the entire "cloud" concept as unstable. Cloud computing is really just a dynamic datacenter with all the usual weak links and issues present in a traditional metal datacenter.
  • by TheLoneGundam ( 615596 ) on Monday October 12, 2009 @09:53AM (#29718939) Journal

    Leaving aside the fact that a "data center" could consist of two servers under Mabel's desk, this is not a "data center" disaster, nor is it a cloud catastrophe.

    This is a contract and contract management failure: the contract with the outsourcer was probably written without specifying that they must do the backups, AND no one established any sort of audit (formal or informal) to ensure that there _were_ backups being taken and that the outsourcer was performing according to the contract.

    Too often, the MBA doing the contract thinks "there, that's handled" once they've gotten all the signatures on the dotted line. "There, backups are handled now" he thinks, because many business folk (not ALL, I don't think it's fair to generalize that far) see these kinds of things as milestones, rather than ongoing processes to be managed.

    • This is headline trolling. The 'cloud' is just a term used to describe what's already been around for over a decade. This has nothing to do with 'the cloud' and everything to do with bad infrastructure policy and incompetent IT staff.

  • by tjstork ( 137384 ) <todd@bandrowsky.gmail@com> on Monday October 12, 2009 @09:57AM (#29718987) Homepage Journal

    When you cut through the "cloud" and look into the center of things, you see that the so-called modern "cloud" computing environment is a giant computer (or computers), surrounded by high-powered priestly geeks doling out resources to everyone, completely centralized. The priests have some new tricks to entertain the masses with, but there's nothing fundamentally different between cloud computing and IBM's vision of computing in the 1960s.

    • Re: (Score:3, Insightful)

      I wish I had mod points because that is the best summation of "cloud" computing I have read yet. Every few years some technological development causes this computing paradigm to be brought up as the "new thing" in computing. Every time this happens there are all these people talking about how it is the "wave of the future" and that all computing will go that way. After a few years, people realize that it has the same limitations that caused it to be rejected except for those limited industries and applicati
    • Re: (Score:3, Insightful)

      There is one difference.

      In previous decades, for the most part, the company that operated the computing center considered the data to be valuable, and took great care to prevent data loss. They knew that the hardware could fail, and so they made multiple copies of each data file. They did backups, and they checked and tested the backups. Most even stored some copies off-site to hedge against the possibility of catastrophic loss of the entire data center.

      At present time, many young people have never seen

    • Straight from the big blue school of "please buy our cloud". Some pretty significant differences, using Amazon as an example:

      1) Self-provisioning - ever tried to get an LPAR commissioned on a mainframe yourself?
      2) Multi-domain failover - want an EU and a US instance? Want to manage failover between them? Yup, you can.
      3) Speed of expansion - the number of LPARs and MIPS isn't fixed; you don't have to fight to the death for another one-hour slice.
      4) Separation of storage and compute - You've got your on-compute d

    • In other words, Timeshare 2.0. Funny thing is, I remember the old-timers talking about that when I started in IT 15 years ago, and exactly how well it worked then... and I've come to the conclusion it works just about as well now. The Cloud sounds like a great idea. I have friends who work at a company that switched to a thin-client/cloud system. It's great, until the network switch or router for a floor goes wonky. Then all of a sudden you have 200 employees dead in the water, unable to do anything productive.

  • What has this got to do with the "cloud"? If your data is critical enough, do it in house or mirror/slave/backup across two or more vendors. The probability of chain failure at one vendor's site alone is much higher than when you use several. The required isolation and separation of your components will also benefit your overall architecture.
  • No true Scotsman (Score:5, Insightful)

    by vadim_t ( 324782 ) on Monday October 12, 2009 @09:59AM (#29719007) Homepage

    This is awfully convenient. Something that at least to my eyes looks a lot like a cloud crashes. Cloud pundits announce:

    "if it loses your data - it's not a cloud".

    So if Amazon's S3 ever fails horribly and loses everybody's data, then it wasn't a cloud either.

  • That difference being that when you're doing things in your own data center, your own people can evaluate what's actually being done. With cloud computing you cannot do that. In both cases you have similar tensions between thoroughness and cost, but in the one your company gets to make the decisions and verify that they're carried out; in the other you do not.

  • by FauxReal ( 653820 ) on Monday October 12, 2009 @10:02AM (#29719053)

    But some cloud technologists insist data center failures are not cloud failures. Is this distinction meaningful?

    Do you think the customer will want to argue semantics with you after you've lost their data?

    • Do you think the customer will want to argue semantics with you after you've lost their data?

      If it was me, I could be tempted. Let's have a go at it...

      To start with, I would have said that, technically, the cloud can't meaningfully fail at all. I suppose it could be down when you needed to access your data, but the connectivity part of the deal (which is the actual cloud) is going to come back up.

      The trouble is that no one buys the "cloud". They buy services delivered over the cloud. So the question

  • by rickb928 ( 945187 ) on Monday October 12, 2009 @10:08AM (#29719103) Homepage Journal

    I'm a TMO subscriber, and I love them, so this is painful. And my sister-in-law is a longtime Sidekick user, so she's in a special agony.

    But T-Mobile is in a potentially no-win situation. They obviously had to believe Danger/Microsoft's assurances that they had good processes to avoid and recover from such failures. They didn't, and now TMO is probably going to take the hit. On one hand, they should - if the service is important, take responsibility and ensure it's managed. On the other hand, they had good assurances, so hey, how much is enough?

    BlackBerry users, you should take note. RIM differs only in scale and, you hope, depth of resilience. Not that RIM hasn't had outages, though no total failure yet.

    TMO may have to tell their Sidekick users to be prepared for the inevitable restore, work with Danger/Microsoft to re-establish service (even though TMO doesn't provide the service, D/M does), and of course offer some monetary compensation, no matter how inadequate.

    And maybe offer them shiny new myTouch3Gs to give the disillusioned Sidekick users an option with a marginally better track record.

    No, wait, that isn't right. I've had to wipe my G1 every update, and some apps don't have a way to save data. They just don't.

    I'm glad I never got on the Sidekick train, but I have no hope that this won't some day hit me. Do you suppose the next major Sidekick update will include data backup? :)

  • This is a service run by Microsoft. Microsoft is a bit hostile to consumers. It would be ironic and sad if Microsoft's failure to maintain the Sidekick service gets blamed on the faceless "Cloud" and it hurts Microsoft's competitors.
  • People don't like (potential) failures that are out of their control. According to statistics, my data on my personal HDD or phone is more likely to fail than data on an average "cloud" storage array, but I can keep my HDDs and phone from harm and monitor and test backups. Same idea with automobiles versus airplanes: cars are supposedly more dangerous, but we're each our own pilot. Airplane crashes are scary partly because of the size, but also because in an emergency, we know there's nothing we can do.
  • It seems to me that the issue lies in whether the data pieces are on the cloud, or just the programs. If I lose the ability to edit a Word document from Office-For-Cloud but I have the file stored locally, I grumble that 'the idiots who run the thing' broke the program, and wait for the 'smart guy white knights' to come fix it. But in this case I'm holding those bits (exclusively, or a copy) so I know the data are safe. Nuke the server from orbit, for all I care - I'm annoyed that I lost

  • predictably doomed (Score:4, Insightful)

    by jipn4 ( 1367823 ) on Monday October 12, 2009 @10:15AM (#29719181)

    Danger held your data hostage from the start and didn't provide backup. Then, when Microsoft took them over, it was clear that they were going to mess with the service and servers. No backup + Microsoft mucking with the servers = kiss your data goodbye.

    But that's no more an indictment of hosted services or "cloud computing" than a Windows BSOD is an indictment of desktop computing. Microsoft screwed up, and quite predictably, too.

  • by John Hasler ( 414242 ) on Monday October 12, 2009 @10:16AM (#29719191) Homepage

    Just define away your problems. ROFL.

  • by Prototerm ( 762512 ) on Monday October 12, 2009 @10:16AM (#29719193)

    Why on Earth would you trust your valuable data (and if it wasn't valuable to you, why keep it in the first place?) to someone else, someone who doesn't answer to the same people you do? I have always thought that "the cloud" is an epic fail waiting to happen. As a concept, it makes no sense. It's a scheme worthy of Professor Harold Hill himself.

    You want your data safe? You want it backed up properly? Don't want to lose it? Then put it on your own hardware and take care of it yourself. Don't leave it to someone else to save your bacon when something goes wrong. Because, in the end, they don't care about you. You're just a monthly fee to them, and the agreement/contract/whatever you signed with them absolves them of all responsibility.

  • The people running the cloud and the data center can bicker till the cows come home, but to the customer, someone says, "trust me, I can let you run your apps and store your data better than if you did it yourself," and then *poof*, it's all gone. Since the customer only interfaces with the company managing the cloud services, the customer sees it as a cloud services failure.

    If the cloud company wants to tell all their customers, "It wasn't our fuck-up, it was this other company that we pay to store your d

  • Sort of (Score:3, Insightful)

    by Kirby ( 19886 ) on Monday October 12, 2009 @10:22AM (#29719255) Homepage

    Well, any time you're storing data in a central place, you have a greater consequence of failure. That's a downside of "cloud computing", or any web application that stores data in a database too.

    The alternative approach is for everyone to have a local copy of their data, which will be lost by individuals all the time, but not by everyone all at once.

    Obviously, if you have a server that's a single point of failure for your company, and you botch a maintenance operation, something went very wrong. And not having a backup - it seems strange for a company that's been around the block a few times and has big resources behind it. You have to write this off as a specific failure, not a failure of the concept of storing data on a remote server.

    I do have a good friend that works for Danger - I really don't envy the week he must be having.

  • That's the real killer. Even if you had all your data loaded on the phone, lose power and poof! With no mechanism to make local backups, you're utterly at the mercy of the cloud.

    I've got a Pre, which is a cloud device, but if my battery dies the same time as the remote servers my data's safe for quite a while. Once the battery recharges I can get one of the sync apps to offload my data. If I were more paranoid I'd get one now but I try to make my own archives straight from Google, webmail, et al.

  • by snspdaarf ( 1314399 ) on Monday October 12, 2009 @10:42AM (#29719537)
    The best part of TFA is the comment below from their version of an AC:

    Cloud architecture shards data

    In this case it certainly did.

  • TOS (Score:4, Funny)

    by ei4anb ( 625481 ) on Monday October 12, 2009 @10:43AM (#29719559)
    The TOS probably made the users aware that "your data is in Danger" so they can't complain now :-)
  • Cloud computing is trusting Someone Else to take care of your data. While there are good, trustworthy organizations out there, for me, it comes down to the old adage of "if you want to ensure something is done right, do it yourself."

    Networks are great for communication, collaboration, and sharing information not available locally (Wikipedia, online scholarly journals, etc) -- but for me, putting word processors online doesn't pass the laugh test. No matter how reliable your network is, if you already have
  • by swschrad ( 312009 ) on Monday October 12, 2009 @10:48AM (#29719635) Homepage Journal

    not just stuffy history book stuff or national security, IMPHO it fully applies to "the cloud."

    if Microsoft can't even build a robust cloud environment, that experiment is done.

    "danger," indeed.

  • by Locutus ( 9039 ) on Monday October 12, 2009 @10:49AM (#29719653)
    Microsoft gutted Danger and left it on life support, but all the while they led their customers (T-Mobile and users) to believe Danger was thriving and doing fine. Wow, doesn't that sound like Paulson stating in early 2007 that the banking system was just fine? The difference: Paulson really was clueless, while Microsoft knew darn well they'd pulled most of Danger's developers over to their project Pink.

    This is what should be up in lights with flares and fireworks and not anything about how bad/good cloud computing is. But once again, there is Microsoft at the wheel and yet the press is saying "pay no attention to that man behind the curtain".

    And this insistence on tying it to cloud computing sounds eerily familiar, since I just read how Steve Ballmer was bashing IBM for not running their business correctly, basically for paying too much attention to software and cloud computing. And he's all amped up about this right when yet another Microsoft failure proves how bad they are at it. Could be spin control, so watch for more of the same if it is.

    LoB
  • But some cloud technologists insist data center failures are not cloud failures. Is this distinction meaningful?

    Of course it's meaningful. If you have a local server that you own, and you choose not to back it up, and it fails with a complete loss of data, that isn't primarily a problem with owning a local server, or with the particular server operating system (though there may be factors associated with either of those that contribute to the crash); it's a problem with you choosing not to back up data.

    If you

  • by viralMeme ( 1461143 ) on Monday October 12, 2009 @11:06AM (#29719887)
    "According to some reports, the failure was due to a SAN (Storage Area Network [neowin.net]) gone wrong at Microsoft's end. It is claimed that Microsoft does not have a working backup of some of the data that has gone missing from customers devices. The SAN upgrade is rumoured to have been outsourced to Hitachi to complete"

    "Microsoft, possibly trying to compensate for lost and / or laid-off Danger employees, outsources [engadget.com] an upgrade of its Sidekick SAN to Hitachi, which -- for reasons unknown -- fails to make a backup before starting"
  • Don't know why there's no sun up in the sky
    Stormy weather
    Since my data and I ain't together,
    Keeps deletin' all the time

  • I had installed Google Gears as a precaution against Google losing my email. Does this work? Gears does keep a copy on your local computer. Is this sufficient?

  • Just today in the Chicago Tribune there is an article [chicagotribune.com] about a new Microsoft "cloud computing" datacenter in the suburbs. It goes on and on about how great cloud computing is and how visionary Microsoft is for their work in this field (*snicker*). They briefly mention some other companies, I think one called "google" and yahoo or whatever, that are following in Microsoft's footsteps into this brave new world of internet-based applications.

    Given that, I doubt MS planned the Danger/Sidekick fiasco in order
  • A datacenter is the backbone of a cloud. Cloud computing is 100% reliable, until of course it fails, and then Marketing/Tech Support will tell you this was a datacenter problem and ask you if you saved a backup.

  • Microsoft today implemented its 100% Data Confidentiality package for T-Mobile Sidekick, comprehensively protecting users' contacts, email and messages from any possible attacker.

    "Our data security is impenetrable," said Steve Ballmer, "and will reassure everyone of the data integrity of our Windows Azure Screen Of Death cloud computing and Windows Mobile initiatives."

    Microsoft plans to leverage the new confidentiality mechanism to finally purge the horror of Vista from the face of the earth, in the sam

  • As I'm sure somebody pointed out here, the Sidekick data loss fiasco occurred largely because nobody had off-site tape backups on hand and nobody wanted to do a backup BEFORE performing their big upgrades.

    It's that simple.

  • What exactly is the difference between today's "cloud computing" and yesterday's "internet-based services?"

    I'm sure this question is often asked, considering that every single web site is a file stored on a remote computer which, by way of internet services, is displayed on computer screens everywhere. Additionally, people have been uploading data to remote storage services since the late 90's with XDrive and its predecessors, but these were never called "cloud computing" then...

    • Re: (Score:3, Insightful)

      by James McP ( 3700 )

      "Real" cloud computing is supposed to be based on a mesh of geographically diverse, redundant servers each carrying various subsets of the data. Think RAID5 for servers, with each partition located in a different part of the world and on different networks.

      Which means it is nothing more than an internet-based service with five 9s of reliability and availability.

      However, it is an *expensive* internet-based service, so it needs a new moniker. But without a "Cloud Computing Consortium" with ownership of the t
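
      A toy illustration of the "RAID5 for servers" idea (the shard contents are invented): n equal-sized data shards plus one XOR parity shard, each held at a different site, let you rebuild any single lost shard.

      ```python
      from functools import reduce

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      shards = [b"DATA-01!", b"DATA-02!", b"DATA-03!"]  # one shard per data center
      parity = reduce(xor_bytes, shards)                # stored at a fourth site

      lost = 1                                          # one site burns down
      survivors = [s for i, s in enumerate(shards) if i != lost]
      rebuilt = reduce(xor_bytes, survivors + [parity])

      assert rebuilt == shards[lost]                    # b"DATA-02!" recovered
      print("rebuilt:", rebuilt)
      ```

      Storage overhead is (n+1)/n instead of the 3x of triple replication; the trade-off is that losing two sites at once is unrecoverable, which is one reason "five 9s" claims hinge on how failures correlate.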

"Marriage is low down, but you spend the rest of your life paying for it." -- Baskins

Working...