Bug Cellphones Data Storage

Server Failure Destroys Sidekick Users' Backup Data

Expanding on the T-Mobile data loss mentioned in an update to an earlier story, reader stigmato writes "T-Mobile's popular Sidekick brand of devices and their users are facing a data loss crisis. According to the T-Mobile community forums, Microsoft/Danger has suffered a catastrophic server failure that has resulted in the loss of all personal data not stored on the phones. They are advising users not to turn off their phones, reset them or let the batteries die in them for fear of losing what data remains on the devices. Microsoft/Danger has stated that they cannot recover the data but are still trying. Already people are clamoring for a lawsuit. Should we continue to trust cloud computing content providers with our personal information? Perhaps they should have used ZFS or btrfs for their servers."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Sunday October 11, 2009 @04:33AM (#29709753)

    homemade cell phone porn videos cried out and then were silenced.

  • by Manip ( 656104 ) on Sunday October 11, 2009 @04:34AM (#29709757)

    This seems a rather silly point to make. I know this is Slashdot and we have to suggest Open Source alternatives but throwing out random file systems as a suggestion to fix poor management and HARDWARE issues is somewhere between ignorant and silly.

    Perhaps they should have had at least mirrored or striped RAID, with an off-site backup every week or so?

    • by timmarhy ( 659436 ) on Sunday October 11, 2009 @04:46AM (#29709795)
      retarded comments like that are the reason these zealots aren't taken seriously in the enterprise.

      i'd hazard a guess that the offsite backups were corrupted as well somehow or were silently failing.

      • by sopssa ( 1498795 ) * <sopssa@email.com> on Sunday October 11, 2009 @05:12AM (#29709901) Journal

        Exactly; this could just as well be a software bug, and that could easily destroy or corrupt the backup data too. I really doubt this service was ran without backups.

        The type of filesystem has nothing to do with this.

        • by Znork ( 31774 ) on Sunday October 11, 2009 @05:47AM (#29710045)

          I really doubt this service was ran without backups.

          Knowing 'enterprise' backups I'd bet there was at least a backup client installed and running. However, I'm equally sure that the backups were, at best, tested once in a disaster recovery exercise and were otherwise never verified.

          Further, responsibility would probably be shared between a storage department, a server operations department and an application management department, neatly ensuring that no single person or function is in a position to know what data is supposed to be backed up, what constraints there are to ensure consistency (cold/hot/incremental/etc.), or to monitor that this is actually what happens and keeps happening as the application and server configuration evolve.

          Backups of dubious value do not seem to be a rarity in enterprise settings.

          • by asaul ( 98023 ) on Sunday October 11, 2009 @06:19AM (#29710161)

            Dubious backups? Depends. We had a system which was a 6TB cluster that was notoriously difficult to back up. This went on for years: it took too long, failures caused issues downstream, etc. Then someone took a moment to realise that the application was not capable of re-using that 6TB of data if it was restored - once the data came in it was processed and archived. To recover the application all they had to do was back up a few gig of config and binaries, and restart slurping data from upstream again. Voilà - the backup was stripped down to nothing, 6TB a day less data to back up, and next to no failures as it was now so quick to back up.

            Then there is the case of an application where the vendor and application developer signed off on a backup solution using a daily BCV snapshot. What they failed to tell us was that the application held data not only in a database, but also in a 6G binary blob file buried deep in the application filesystem. If the database and the binary were out of sync in any way, it could mean missed or replayed transactions, or generally that the application was inconsistent. As this was an order management platform, that was bad. You can guess the day we found out about this dependency... yup, data corruption. Bad vendor advice screwed the binary file, and all we had to go on was a backup some 23 hours old, in which the database was backed up an hour after the application. Because of a corresponding database SNAFU, the recovery point was actually another day before that, with the database having to be rolled forward. It was at this point we found out that, despite the signed-off backup solution, the vendor's documented recommendation (which was not supplied to us) was that the only good backup was a cold application one - not possible on a core order platform. Thankfully, after some 56 hours of solid work the application vendor managed to help sort the issue out and the restore from backup was not actually needed. The backups were never really tested, as the DR solution worked on SRDF - DR for data corruption was never really part of the design (at a very high level, not just on this platform).

            So there you have it. Two dubious Enterprise backups - one not needed, the other not usable.

          • Re: (Score:2, Insightful)

            by cupantae ( 1304123 )

            When I read that you had quoted "I really doubt this service was ran without backups," I twitched, and the thought "I know it's bad grammar, but let's just ignore it, please" was loud in my ears. I was so relieved when I saw that you weren't mentioning it. I don't know what this makes me, but it happens all the time. I'm definitely bothered by poor grammar and spelling, but I want no one to ever point it out.

        • by Rakshasa Taisab ( 244699 ) on Sunday October 11, 2009 @06:05AM (#29710109) Homepage
          A bug that sneaks into the two or three offsite locations, destroying the tapes which are randomly checked before being shipped to ensure they contain valid data? Really nasty those bugs.
          • Re: (Score:3, Insightful)

            I've had something like that happen. The recovery system for a partner had never been tested with a _full_ recovery, only with recovering a few selected files. But because someone decided to get cute with the backup system to pick and choose which targets got backed up, individual directories each got their own backup target. Thousands and thousands of them. And the backup system had a single tape drive, not a changer.

            The result was that to restore the filesystem, the tapes had to be swapped in and out to g

            • Re: (Score:3, Funny)

              Something tells me you have grey hair and wrinkles. And I say that in a good way.

              • by Antique Geekmeister ( 740220 ) on Sunday October 11, 2009 @10:41AM (#29711325)

                It's not the gray hair (or what is left of it!), and those aren't wrinkles. They're laugh lines from the terrific amusement when some youngster ignores the hard-won lessons of the last millennium, especially when they have to call me or someone like me to clean up the mess. The laugh lines are especially deep from when I collected a paper trail to show where their supervisor ignored my written warnings about the danger: those are used with caution, but can be very, very handy.

        • by Jezza ( 39441 ) on Sunday October 11, 2009 @08:10AM (#29710623)

          The kind of filesystem can help - I'm familiar with ZFS concepts so I'll stick to those:

          In ZFS, when you write to a file you don't write over the pre-existing data; you write elsewhere, and that new location gets mapped in upon success. The old data is still there, and you can see the old mapping (you know what was there). Normally that space then gets recycled, but you can switch this pruning off, and now you have a complete record of everything that was ever done on the disk. To stop it ever running out of space you can either add disks to the pool, or prune very old data (older than a given age - maybe 6 months?).

          So it helps.
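          To make the copy-on-write idea above concrete, here is a toy model of the bookkeeping in Python. It is only a sketch of the concept, not how ZFS is actually implemented: writes always go to fresh blocks, and a snapshot is just a frozen copy of the block map, so overwritten data stays reachable until snapshots are pruned.

```python
# Toy copy-on-write store: blocks are never overwritten in place, and a
# snapshot is just a frozen copy of the block map, so "old" data stays
# reachable after an overwrite until the snapshot is pruned.
# This only illustrates the concept; it is not how ZFS is implemented.

class CowStore:
    def __init__(self):
        self.blocks = {}      # block_id -> data, append-only
        self.live = {}        # name -> block_id (current block map)
        self.snapshots = {}   # label -> frozen block map
        self._next_block = 0

    def write(self, name, data):
        self.blocks[self._next_block] = data   # always a fresh block
        self.live[name] = self._next_block     # remap the name to it
        self._next_block += 1

    def snapshot(self, label):
        self.snapshots[label] = dict(self.live)

    def read(self, name, snapshot=None):
        table = self.snapshots[snapshot] if snapshot else self.live
        return self.blocks[table[name]]


store = CowStore()
store.write("contacts", "good data")
store.snapshot("before-upgrade")
store.write("contacts", "garbage written by a buggy upgrade")

print(store.read("contacts"))                    # garbage (current state)
print(store.read("contacts", "before-upgrade"))  # the old data is still there
```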

      • by malchus842 ( 741252 ) on Sunday October 11, 2009 @05:49AM (#29710049)

        One reason why our corporate policy is that we actually have to validate backups for every system on a regular basis (this means doing a full restore of a tape called from off-site), where the regularity is directly proportional to the criticality of the system. The more critical, the more often we test. On our iSeries, they restore the weekly backup tape EVERY week on the QA server - both for the purposes of refreshing it, AND to validate the backups. We also have a quarterly 'random' test where a system is chosen randomly and it must be recovered from bare metal using only our standard procedures + the backup tape.

        We've discovered all kinds of strangeness with backup tapes through the years. Our Tier 1 systems have completely separate instances in geographically diverse areas, with data-replication.

        Granted, this isn't cheap, but our data isn't either.

        • I've always been amazed that tape is trusted as much as it is. It seems (anecdotally at least) to have a disproportionately high failure rate.
          • by jimicus ( 737525 ) on Sunday October 11, 2009 @08:08AM (#29710613)

            I've always been amazed that tape is trusted as much as it is. It seems (anecdotally at least) to have a disproportionately high failure rate.

            I'm not sure that's the problem so much - after all, LTO has a read head positioned directly after the write head and automatically verifies as it goes along. A tape error is dead easy to spot.

            There are a number of places where things can fall apart, and tapes don't even need to come into the matter:

            • Nobody checking the logs
            • Failure to understand the processes necessary to get a good backup. (You can't just dump the files that comprise a database to disk - you must either quiesce the database or use the DBMS' inbuilt backup routine - or you will wind up with inconsistent files and hence an inconsistent database. You'd be amazed how many people don't understand this; see the sketch after this list.)
            • Failure to maintain backup processes. (When you moved the database to another disk because you were running out of space, you did update your backup process? Right?)
            • Not doing any test restores.
            • Not doing enough test restores, or not doing them carefully enough. (If you're unlucky, your database will come back up OK even though you didn't quiesce it before carrying out the backup. Why do I say unlucky? Well, if it had not come up OK, you'd know immediately that there was a problem with your process. Then once the database is back up, make sure you check the restored data to ensure that recent transactions which should be on the backup actually are.)
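            To make the quiescing point concrete, here is a minimal sketch that uses SQLite's online backup API as a stand-in for whatever DBMS is actually involved (the file names are invented for the example). The engine's own routine hands you a consistent copy, which naively copying the live files underneath a running database does not.

```python
# Illustration of "use the DBMS's built-in backup routine", with SQLite's
# online backup API standing in for whatever engine is really involved.
# Copying the live files with cp/tar while writes are in flight is exactly
# what produces the inconsistent restores described above.
# The file names here are made up for the example.
import sqlite3

src = sqlite3.connect("orders.db")          # the live database
dst = sqlite3.connect("orders-backup.db")   # the backup target

with dst:
    src.backup(dst)   # engine-consistent copy, safe even while writers are active

dst.close()
src.close()
```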
            • Re: (Score:3, Informative)

              by AK Marc ( 707885 )
              Ever have a tape drive with misaligned heads? That one drive, and only that one drive, will be able to read those tapes - and sometimes even it can't read them after the tape is ejected, but will show OK on a verify done before the tape is ejected. You either have a verified backup that can't be used, or a pile of tapes that are completely useless if that drive ever fails.

              I found one of these when doing a backup/restore to upgrade a server (backup the data from ServerA and restore the data on ServerB). I
        • The value of data (Score:5, Insightful)

          by symbolset ( 646467 ) on Sunday October 11, 2009 @12:49PM (#29712067) Journal

          Granted, this isn't cheap, but our data isn't either.

          Microsoft bought Danger for half a billion dollars. Current estimates of the value of this data are roughly... half a billion dollars, plus a little. There's little doubt that in addition to destroying the entire value of the acquisition they've created a connection between "Microsoft", "Danger" and "data loss". In their release T-Mobile isn't being shy about tying those things together. Not good. That's going to have impacts even for some completely unrelated cloud-based products like Azure [microsoft.com].

          Somebody's about to get a really awkward performance review.

      • by mike260 ( 224212 ) on Sunday October 11, 2009 @05:55AM (#29710067)

        There are plausible reports as to how this happened here [hiptop3.com].

        tl;dr - They tried upgrading their SAN without making a backup first, and the upgrade somehow hosed the entire SAN.

        • by bertok ( 226922 ) on Sunday October 11, 2009 @06:57AM (#29710299)

          There are plausible reports as to how this happened here [hiptop3.com].

          tl;dr - They tried upgrading their SAN without making a backup first, and the upgrade somehow hosed the entire SAN.

          That's the thing that has always worried me most about SANs: you have all your eggs in one basket. No matter how redundant or reliable the hardware is, one bad update or trigger-happy admin can cause the instant loss of all your data. That's only slightly better than having your data center burn down. You still have your hardware, but a total restore like that can be a nightmare. I've heard somewhere that 80% of corporations couldn't recover from a scenario like that.

          Here are some fun numbers: a typical tape restore runs at something like 70MB/sec, if you're lucky, per tape drive. Some small low-end SANs that I see people buying these days are 10TB or bigger. At those speeds, it takes 40 hours to restore the complete system. What's worse is that it doesn't scale all that well either: you can get more drives, but the storage controllers and back-end FC loops become a limit. If you have some big cloud provider scenario, a complete restore could take days, or even weeks.
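          Just to sanity-check those numbers, a throwaway back-of-envelope calculation (the 70MB/sec and 10TB figures are the rough assumptions above, not measurements from any real system):

```python
# Back-of-envelope restore-time estimate using the rough figures above
# (70 MB/sec per tape drive, 10 TB SAN); assumptions, not measurements.

def restore_hours(capacity_tb: float, mb_per_sec: float, drives: int = 1) -> float:
    """Hours to stream capacity_tb back through `drives` drives at mb_per_sec each."""
    total_mb = capacity_tb * 1_000_000          # decimal units: 1 TB = 1,000,000 MB
    return total_mb / (mb_per_sec * drives) / 3600

print(f"1 drive:  {restore_hours(10, 70):.0f} hours")             # ~40 hours
print(f"4 drives: {restore_hours(10, 70, drives=4):.0f} hours")   # ~10 hours, if nothing else bottlenecks
```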

          What's scary is that mirroring or off-site replicas don't help. If your array starts writing bad blocks, those will get mirrored also.

          • by vk2 ( 753291 ) on Sunday October 11, 2009 @07:36AM (#29710461) Journal
            That's why you have logical redundancies. I work for a Fortune 10 company and this is standard practice for all mission-critical applications. The application has to be geographically redundant, with an install base in at least 3 data centers (ATL, SEA and DLS). Different SAN technology at each DC. All Oracle databases have two physical Data Guard standbys with 4-hour and 8-hour apply latency (to guard against user errors), and all J2EE apps are configured to switch connections from one DB to the other almost on the fly or with a reboot. Some really, really critical databases have all this plus transaction duplication via GoldenGate to remote databases to offload reporting queries. We have had issues where SAs screwed up allocating LUNs and ended up f*cking up the file systems, but we recovered in every scenario, even a 30 TB DB restore over 2 days.

            It's amazing that a consumer-serving company like T-Mobile risked itself by hosting their application on a Microsoft platform. Furthermore, where is the DR in all this? Who the f*ck in their right mind fiddles with a SAN without confirming a full backup of all applications/databases? It appears that Hitachi and Microsoft are at fault here (if SAN maintenance is the root cause of this failure), but T-Mobile is the fool for allowing these companies to ruin their data. Not only will there be no consequences for MS or Hitachi from this issue - T-Mobile will be pouring in more money to fly in the MS and Hitachi consultants.
            • Re: (Score:3, Informative)

              by uncleFester ( 29998 )

              "Who the F*ck in the right mind fiddle something on SAN without confirming a full backup of all applications/databases?

              people who drink the kool-aid whenever vendors of said products repeatedly swear up and down all their tasks/patching/operations are 'totally no-impact and no-visibility changes.' combine that with people unwilling to take downtime or spend $$$ to properly protect the contents ahead of time and you have just cooked a recipe for disaster.

              -r (not speaking from personal experience.. of course.. :/ )

            • by xswl0931 ( 562013 ) on Sunday October 11, 2009 @10:05AM (#29711131)

              Looking at the timeframe that Danger was acquired by MSFT and that the Danger OS was likely based on NetBSD (http://en.wikipedia.org/wiki/Danger_Hiptop), it's more likely that Danger was still using NetBSD as their Server Software and this was merely a process issue. Blaming it on the "Microsoft Platform" without any real data is just spreading FUD.

          • Re: (Score:3, Interesting)

            by JasonBee ( 622390 )

            In our environment, a large government shop, our data volumes are capped at around 1 TB of storage for that very reason. Between the SAN, and the tape backups...they just simply have to create a physical cutoff point for data storage due to those onerous recovery periods.

            There is nothing wrong in our shop with having TWO 1 TB volumes, but you will never get approved to have one single 2TB. Problem solved...at least for file storage. Database backups are managed via other mechanisms like replication.

          • Re: (Score:3, Informative)

            by Tweezer ( 83980 )

            Even with a SAN you need to limit volumes sizes to whatever size you can restore within the acceptable restoration window. There are also those times where you just want to run a chkdsk and if the volume is too big, it takes too long.
            That being said, I can't believe they didn't have any backup. Even if they skipped the pre-upgrade backup, they should have had one from last night/week/month. Any of those options would be better than nothing. I have to assume they were doing backup to disk on the same SAN

        • by Anonymous Coward

          I work in telecom at a different provider. SAN upgrades are performed by the SAN vendor and, IME, they always demand a complete backup prior to starting any work unless the customer demands otherwise. If the customer doesn't want the backup, we always had to get a Sr VP to sign off. There were about 10 Sr VPs in the company - not like at a bank where everyone is a VP.

          Usually, we would perform firmware upgrades only when migrating from old SAN equipment into new. The old equipment would be upgraded and used

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Repeat after me, you haven't got backups unless you've tested RESTORES.

      • by petes_PoV ( 912422 ) on Sunday October 11, 2009 @06:35AM (#29710209)
        It's not a backup unless you can prove it will restore. Until then it's just a waste of tape, or disk, and time

        The point about backups is not to tick the box saying "taken backup?" but to provide your business / customers / whatever with a reliable last resort for restoring almost all their data. If you don't have 100% certainty that it will work, you don't have a backup.
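        As a rough illustration of what "prove it will restore" can mean day to day, here is a toy verification script: restore the latest backup into scratch space, then compare file hashes against a manifest recorded at backup time. The paths, the restore-tool command and the manifest format are placeholders for whatever your backup product actually provides, not any real tool's interface.

```python
#!/usr/bin/env python3
"""Toy restore test: pull the latest backup into scratch space, then compare
file hashes against a manifest recorded at backup time. The paths, the
'restore-tool' command and the manifest format are placeholders, not any
real product's interface."""
import hashlib
import json
import pathlib
import subprocess
import sys

SCRATCH = pathlib.Path("/restore-test/scratch")          # hypothetical scratch area
MANIFEST = pathlib.Path("/restore-test/manifest.json")   # {relative_path: sha256} written at backup time

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Restore the most recent backup into scratch space (placeholder command).
subprocess.run(["restore-tool", "--latest", "--target", str(SCRATCH)], check=True)

# 2. Verify that every file in the manifest came back intact.
expected = json.loads(MANIFEST.read_text())
bad = [rel for rel, digest in expected.items()
       if not (SCRATCH / rel).is_file() or sha256(SCRATCH / rel) != digest]

if bad:
    sys.exit(f"RESTORE TEST FAILED: {len(bad)} files missing or corrupt, e.g. {bad[:3]}")
print(f"Restore test passed: {len(expected)} files verified.")
```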

        • Re: (Score:3, Interesting)

          by drjzzz ( 150299 )

          It's not a backup unless you can prove it will restore. Until then it's just a waste of tape, or disk, and time

          True. There's a similar problem in biological research, where people think they have secured frozen samples but they haven't tested whether the samples are valuable after thawing. For example, frozen cells might not be viable, or RNA might be degraded. Too often the samples are just wasting freezer space. Anybody can freeze (or backup), the question is whether what you thaw (restore) is valuable.

        • To the standby or testing system. Our staging/testing systems all run yesterday's production data, restored from the most recent backup.

          if your backups don't work then neither will your test/staging server... Which will be noticed.

          What do you get?
          * Backups tested every day.
          * A test/staging/standby system identical to the production.
          * Something the business can run all the crappy queries they like against without affecting the production system.
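          A rough sketch of that daily refresh, with the restore command, staging host and smoke query all hypothetical placeholders: restore last night's production backup into staging and run a quick sanity query, so a bad backup shows up as a visibly broken staging box the same morning.

```python
#!/usr/bin/env python3
"""Toy daily staging refresh: restore last night's production backup into the
staging database, then run a smoke query. The 'restore-tool' command, the
staging host, database and query are hypothetical placeholders; the point is
that a bad backup breaks staging the same morning, where it will be noticed."""
import subprocess
import sys

# 1. Restore the most recent production backup into staging (placeholder command).
restore = subprocess.run(["restore-tool", "--latest", "--into", "staging-db"])
if restore.returncode != 0:
    sys.exit("Staging refresh failed: last night's backup did not restore cleanly.")

# 2. Smoke test: staging should answer a basic query with plausible numbers.
check = subprocess.run(
    ["psql", "-h", "staging-db", "-d", "appdb", "-c", "SELECT count(*) FROM orders;"],
    capture_output=True, text=True)
if check.returncode != 0:
    sys.exit(f"Staging smoke test failed:\n{check.stderr}")
print(check.stdout)
```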

      • by CODiNE ( 27417 )

        What's really retarded is that using ZFS would prevent bitrot and warn you of impending or intermittent hardware failures, but it is seen as OSS zealotry by people who haven't thought out the problem.

        • by jimicus ( 737525 )

          I wouldn't say that, but ZFS is still a little young for my liking. There are plenty of horror stories concerning data loss, and more to the point plenty of recent horror stories.

      • Re: (Score:3, Interesting)

        by runningduck ( 810975 )
        At the very least they should have been segmenting customer data. How could a single failure, outside of a ten mile wide asteroid hit, wipe out all customer data? Was everything stored in a single giant registry? I see this as one of the single greatest failings in current system design. Top professionals trust tools more than data design and management processes. I would say the same thing if they were using ZFS or btrfs. Technology is NOT a solution. Technology is at most a tool that contributes to an
    • by rastilin ( 752802 ) on Sunday October 11, 2009 @04:50AM (#29709807)

      This seems a rather silly point to make. I know this is Slashdot and we have to suggest Open Source alternatives but throwing out random file systems as a suggestion to fix poor management and HARDWARE issues is somewhere between ignorant and silly.

      Not as silly as it might appear. One of ZFS's main functions is that it can compensate for some degree of hardware failure.

      • by WarlockD ( 623872 ) on Sunday October 11, 2009 @05:55AM (#29710063)
        Ever try to restore from a ZFS corruption? It IS easy and it can be done. However...

        What if the data was on an EMC storage array and the tech told them it's all lost? What if you're dealing with a Tier 1 vendor (I am looking at you, Dell EqualLogic) that swears UP and DOWN that there is no way to recover the system after a second drive out of a RAID 5 has been pulled? Hell, try just a standard RAID 5 card from a Tier 1 vendor. (Not talking about calling the likes of 3ware support directly; they are honestly good, and I've recovered a few arrays with them.)

        I "suspect" that they are running it off a storage array that failed big time, or lost the LUN, or just someone decided to die and take the server with it. There is just to much we don't know. Was Dagger installed on multi-servers? Was it clustered? Is it a cloud system? Does it run its own storage system or requires additional hardware?

        But you know what? ZFS, EMC, even Windows 2008: all moot. Why? WHERE ARE THE TAPE BACKUPS?!?! SERIOUSLY. The ONLY way they could have lost ALL that data is that they didn't have a backup solution. Otherwise their "press release" would say "...however we will be restoring the data from last week's/month's tapes..."

        I do like how they keep saying "Microsoft/Danger" as if they are at fault. A good admin would expect a new car to catch fire and run into a bus full of nuns.
        • Re: (Score:3, Interesting)

          by Cylix ( 55374 )

          Well the first problem was the EMC storage array.

          The second problem is believing the tech when he says the data cannot be reclaimed.

          The third problem is using a simple raid 5 volume on a great deal of data. Multiple drives fail all the time! Hell, racks of servers fail in unison.

          Even if the DCB data is corrupted this can be corrected even on a large SAN.

          All or part of the data is generally recoverable.

          Either this was an impossibly horribly managed install or something very complex has happened. Generally, t

    • by gravos ( 912628 ) on Sunday October 11, 2009 @05:01AM (#29709857) Homepage
      The current major cloud providers (Google and Amazon) both replicate your permanent data to multiple hard disks (Google: 3, not sure about Amazon) in multiple areas of the datacenter, and I know Google is looking at providing replication to different datacenters (which is more complex than replication in the same datacenter because of the time delay).
    • Have ZFS/btrfs developed tools to undelete or rescue files? It is pretty hopeless for ext[234] in my experience.

      • Re: (Score:3, Informative)

        by myxiplx ( 906307 )

        Yes, it's called a snapshot. Take a snapshot and you can either roll the entire system back to that point in time, or just browse its contents and extract the files you want.

    • by Jezza ( 39441 )

      Err... This is cloud computing, right? Why do you have off-site backups every week or so?! The data should be stored in multiple geographic locations ALL THE TIME. The ZFS suggestion isn't as dumb as you might think, you tell ZFS not to prune old data, then if stuff gets "deleted" it's still on the disk (I won't bore you with an explanation here). You're right ZFS won't help you against something that destroys (physically) the disks (so multiple locations are required) but it will help you against hacking o

  • A server failure? (Score:4, Informative)

    by corsec67 ( 627446 ) on Sunday October 11, 2009 @04:36AM (#29709765) Homepage Journal

    A server failure caused all of the data to be lost?

    No backups? Not even a spare server with a mirror of the data? No servers in different places? No off-site backup strategy?

    As an aside, why would that data be stored in volatile, non-battery-backed RAM? All of my graphing calculators have a special battery to keep the RAM contents, and they aren't even supposed to store important stuff. Flash is cheap enough these days; why should simply removing the battery cause important data to be lost?

    • by Hadlock ( 143607 ) on Sunday October 11, 2009 @04:58AM (#29709847) Homepage Journal

      Reportedly Sidekicks are thin clients; other than making phone calls, everything on the phone is saved on the server side. Which is a special kind of retarded, in today's world where a BlackBerry performs all the same functions and provides a local backup feature. But yeah, as for the backups: all your backups are worthless if your data backup code is flawed and nobody ever checks the backup tapes. When MS bought the service, they probably changed the location the servers were in, plugged everything back in, and kept going. I imagine a project like that would be on a short timetable, and "checking to see that the backup tapes are really being backed up to" is low on the priority list when the service is already live.

      • Re:A server failure? (Score:5, Informative)

        by Serious Callers Only ( 1022605 ) on Sunday October 11, 2009 @05:44AM (#29710031)

        There are some interesting background leaks on the takeover of Danger in this article [appleinsider.com] which seem to imply they cut a lot of staff and gutted the company, which is now running on a skeleton crew. So I guess it's not too surprising when this sort of mistake is made. Not the most reliable source, but they did definitely cut a lot of Danger staff after the acquisition.

      • Reportedly Sidekicks are thin clients; other than making phone calls, everything on the phone is saved on the server side. Which is a special kind of retarded

        Isn't that also how Android works?

        I mean sure, the apps and such are on internal flash, but it's a different story for your "important" data such as email or contacts list. Heck, as I've learned, one can't even read one's existing ("synced") email without a working web connection. How they can call that "syncing", and what it's doing besides simple header indexing, is beyond me.

        This is another reason I am loath to trust "the cloud" -- if I know I can be self-sufficient (in a data accessibility context), tha

        • by Troed ( 102527 )

          Isn't that also how Android works?

          No.

        • by RedK ( 112790 ) on Sunday October 11, 2009 @08:33AM (#29710723)
          No, it's not how Android works, or how the iPhone works either. You can have cloud-enabled applications, but you can also have local-storage-based ones without any problems. There is nothing in the SDKs that forces you to use the cloud for storage at all.
        • Re: (Score:3, Informative)

          by hedwards ( 940851 )
          It's not as much of an issue. You might be using a product which the Data Liberation Front [dataliberation.org] hasn't gotten to yet, but Google does have people working on those applications to make it possible to make one's own backup. I'm not sure what specifically triggered that, but I keep a backup of any important information on my computer, which is backed up to my local backup mirror and remotely.
    • by PolygamousRanchKid ( 1290638 ) on Sunday October 11, 2009 @05:15AM (#29709909)

      A server failure caused all of the data to be lost?

      Maybe it was the server failure . . . maybe they only had one . . . ?

  • by christwohig ( 1579191 ) on Sunday October 11, 2009 @04:38AM (#29709775)
    So are we saying Microsoft didn't have a backup? What about an offsite backup? Who wants to bet they were using their own backup solution? If they had a decent storage array they could have had snapshots and offsite replicas to restore from.
  • Sidekick (Score:5, Funny)

    by nadaou ( 535365 ) on Sunday October 11, 2009 @04:38AM (#29709777) Homepage

    shit, is that TSR still hanging around? goodness!

    If the above means anything to you, "apt-get install joe mc" will make you smile as well.

    • Re: (Score:3, Informative)

      by tangent3 ( 449222 )

      Ohh yes.. Need an ASCII table? It's just a Ctrl-Alt away

    • Means what it said (Score:3, Informative)

      by SuperKendall ( 25149 )

      shit, is that TSR still hanging around? goodness!

      Dude, what part of "Stay Resident" did you not understand. It's not like selling your computer rids you of it.

      That's why I never ran them, nor consorted with Daemons.

  • Backups? (Score:3, Interesting)

    by ipsi ( 1181557 ) on Sunday October 11, 2009 @04:41AM (#29709789)

    Either this is a really, really serious meltdown which completely killed not only the server but all their backups as well (and what're the chances of that?), or their IT guys have been really, really slack and just didn't make any backups...

    Guess they should have used a better smartphone, like *anything* else on the market... Even the cloud-centric Pre will still work if you don't have access to the Cloud - even if Google and/or Palm dies, you'll still have all your information on your phone! Jesus... Doesn't inspire confidence...

    • Re:Backups? (Score:5, Insightful)

      by TheSunborn ( 68004 ) <mtilsted@NoSPAm.gmail.com> on Sunday October 11, 2009 @04:47AM (#29709799)

      Or this was really a software error, and the backup servers in another datacenter just copied the faulty data/delete command.

      They should really be far too big to have all their data stored in a single datacenter with no offsite backup. (Or they should have an entry on thedailywtf.com)

  • by delta98 ( 619010 ) on Sunday October 11, 2009 @04:52AM (#29709817)
    'nuff said.
  • by tres ( 151637 ) on Sunday October 11, 2009 @04:57AM (#29709837) Homepage

    This is an issue of irresponsibility. Plain and Simple. The company responsible for maintaining the data should -- at the very least -- have had some full system backup from last month. If they had some old backup somewhere at least you could chalk it up to systems failure or bad backup tape or bad admin or something.

    But the fact that there is no backup anywhere indicates brazen negligence on the part of everyone responsible for the data. Everyone who had a part in designing the system and managing the system is culpable. The most ridiculous part of this is the over-reliance on server-side data storage by the sidekick designers.

    • by 1s44c ( 552956 ) on Sunday October 11, 2009 @10:54AM (#29711391)

      But the fact that there is no backup anywhere indicates brazen negligence on the part of everyone responsible for the data. Everyone who had a part in designing the system and managing the system is culpable. The most ridiculous part of this is the over-reliance on server-side data storage by the sidekick designers.

      I will bet you there were good people -SCREAMING- to fix the backups, implement and test failover and all sorts of other good things. In my experience things like this are due to management refusing to spend money fixing problems that have not lost customers yet.

  • by AHuxley ( 892839 ) on Sunday October 11, 2009 @04:58AM (#29709849) Journal
    Right feature, wrong server? MS understands the need for a "Rose Mary Stretch" default setting.
    The congress critters have learned a lot from the "terrible mistake" of email backups.
    From cute page boys to Iran contra, MS can market this as a feature.
  • DIY phone backups (Score:4, Informative)

    by golfnomad ( 1442971 ) on Sunday October 11, 2009 @05:03AM (#29709869)
    There are 3rd party apps out there that will let you "backup" your phone data yourself. I personally use a program called BitPim, www.bitpim.org (make sure you d/l the latest version). It works with many different phone models and I have used it several times to "restore" my phone data (had 2 phones with hardware issues). It restored my calendar, notes, phone book and ring tones (that last one can save you d/l $$$). It is easy enough to install and use; you do not have to be a total geek to make it functional (but having one available to help you set up backups would probably help). Been working in the IT industry too long to rely on someone else backing up my data for me, and I will not encourage Murphy to have a party in my honor!
  • WTF (Score:5, Insightful)

    by ShooterNeo ( 555040 ) on Sunday October 11, 2009 @05:08AM (#29709883)

    This is unbelievably bad. The real problem is : why aren't there incremental off site backups to another server farm? A weekly binary difference snapshot would have made this failure less catastrophic.

    Ultimately, with a complex application like this, you can't guarantee 100% that the code doesn't have a bug in it that could result in loss of user data. You can be ALMOST sure it won't, but 100% is not possible with current analysis techniques. (even a mathematical proof of correctness wouldn't protect you from a hacker)

    But a properly done set of OFFLINE backups, stored on racks of tapes or hard disks in a separate physical facility : you can be pretty sure that data isn't going anywhere.

    • Huh? (Score:3, Insightful)

      by msauve ( 701917 )
      "incremental..."weekly binary difference"

      Uh, those would do nothing in this case, where it appears the entire DB has been lost. You need a regular full backup, or diffs and incrementals are just cruft. It appears they don't even have that, since there's no talk of restoring to month (or ?) old data.
      • Re: (Score:3, Interesting)

        by kobaz ( 107760 )

        "incremental..."weekly binary difference"

        Uh, those would do nothing in this case,

        I agree. Weekly? WEEKLY?!!! What is this... 1980? Hell, even in 1980 people with critical data in their Apple II spreadsheet kept more than one copy of their data on a daily basis.

        I'm not sure why, but one of our customers had a backup daemon running with just incrementals being done. There was one full backup done two years ago and an incremental every night. Well.. they had a computer fry one weekend. It was a crappy windows backup program with only a point and click interface. No way in hell am I go

    • Re:WTF (Score:5, Interesting)

      by Locutus ( 9039 ) on Sunday October 11, 2009 @10:02AM (#29711119)
      From the sounds of it, Microsoft couldn't turn Danger into a WinMo platform, so they gutted it of employees instead of spinning it back off, since they'd rather have it dead than spreading more Java - but not dead before they had Pink out the door. So when you fire everyone from the top downward, you end up with people whose job is to turn the lights off when the doors get locked for good. They're not motivated much, nor are they skilled in all of what used to be required to run the shop. Auto-pilot mode comes to mind.

      So maybe the backup system needed to be checked or a CRON job verified or maybe the computer in Joe Fired's office was part of the backup process in some little way but important enough that the whole job was failing every night.

      As I said, Microsoft tried to replace the Danger stack with Microsoft software, but it wasn't going to work or got too much backtalk (thinking of Softimage) and threats of everyone leaving if they had to port to the WinMo pile/stack. They moved anyone who'd go over to Pink and left the rest to keep the life support systems running. Oops, they failed.

      With Ballmer publicly saying that WinMo has been a failure, he's hearing the press say WinMo 6.5 is a yawn and expectations are that the Sony PS3 will eclipse MS XBox, and recently reading about how he's telling people that IBM doesn't know what they are doing....There's probably a new monkey-boy dance going on inside his office we'd probably love to see. It might be too dangerous being so close as to record it.

      Will Microsoft ever make any profits from anything outside of MS Windows and MS Office? Ballmer's 8-Ball still seems to be telling him something very different from what everyone else is seeing.

      LoB
  • Forget all the speculation and semi-random after-the-fact suggestions, I am waiting for the write-up to discover how this monumental cock-up occurred. I hope I don't just learn that 'backups would have been a good idea'.

    • He also hopes that you are not going to learn only now, that backups would have been a good idea.

      You SHOULD have said, I hope THEY don't just learn, 'backups would have been a good idea.'

      Your boss again, this is what your meant right?

      • Erm, not quite.

        I was stating that if I read the report I hope I don't just learn that MS/Danger concluded that backups would have been a good idea.

        FWIW: Our corporate backup strategy (for which I am responsible) comprises a mesh of servers across some of our sites (we have 35) that run daily backups, syncing data sets between sites and providing a three-tier level of daily, weekly and monthly snapshots. I can restore any single file back to its state within the last 90 days (more if needed) at the click of

  • Bad brand (Score:2, Funny)

    by MM-tng ( 585125 )

    It's like being kicked in the side.

  • All your data are lost by us.
  • RIP Sidekick (Score:5, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday October 11, 2009 @06:13AM (#29710145) Homepage Journal

    With all the competition in the smartphone market today, this is probably an unrecoverable error. If they manage to recover the data then they will come off as heroes for having the courage to tell their customers promptly. Otherwise they just look like what they are: incompetent. No great loss, though.

  • by MrCrassic ( 994046 ) <deprecated@@@ema...il> on Sunday October 11, 2009 @06:25AM (#29710175) Journal

    HOW THE HELL DO THEY NOT HAVE OFF-SITE TAPE BACKUPS????

    So essentially, everybody's Sidekick backup data, which is apparently critical should they ever lose power, was all concentrated on A SINGLE SERVER? I hope they at least say their tape backups caught fire and their replicated server died on the same day too...

    Their retention lines are going to be hot this Columbus Day weekend! The iPhone is getting cheaper...

    • Forgot to mention that a supporting reason why T-Mobile will be dealing with cancellations left and right for a little while is that tons of people hate the Sidekick anyway, and this EPIC FAIL is an EPIC excuse to jump ship right now.

    • Re: (Score:3, Insightful)

      by AHuxley ( 892839 )
      "Back him up, boys!"
      T-Mobile says, "but I thought you were going to back us up!"
      Robbie says, "We didn't get rich buying a lot of servers, you know!"
  • by HonestButCurious ( 1306021 ) on Sunday October 11, 2009 @06:32AM (#29710195) Journal

    According to a very long article on AppleInsider:
    http://www.appleinsider.com/articles/09/10/09/exclusive_pink_danger_leaks_from_microsofts_windows_phone.html&page=3 [appleinsider.com]

    MS was misleading T-Mobile about the state of Sidekick support, and apparently charging hundreds of millions every year for, and I quote "a handful of people in Palo Alto managing some contractors in Romania, Ukraine, etc". This is apparently because most of the Sidekick devs had either moved to Pink or quit out of disgust.

  • Interesting article about the Microsoft/Pink/Danger/Sidekick [roughlydrafted.com] relationship and leaks indicating that Microsoft are trying to kill Sidekick without telling the partners. Microsoft would never do such a thing of course ...

    Rich.

  • Yesterday,
    All those backups seemed a waste of pay.
    Now my database has gone away.
    Oh I believe in yesterday.

    Suddenly,
    There's not half the files there used to be,
    And there's a milestone hanging over me
    The system crashed so suddenly.

    I pushed something wrong
    What it was I could not say.
    Now all my data's gone and I long for yesterday-ay-ay-ay.

    Yesterday,
    Need for backup seemed so far away.
    Seemed my data were all here to stay,
    Now I believe in yesterday.

    Anonymous

  • Cloud computing?

    That ain't no cloud. That's the fog obscuring the view of sanity.

    IT has been trying this crap ever since the emergence of personal computers.

  • Clerk: Danger Powers personal effects [shows box of off-site tapes and such]
    MS: Actually my name is Microsoft Powers...
    Clerk: It says here - name: Danger Powers
    MS: No no no no no... Danger is my middle name
    Clerk: Okay, Microsoft Danger Powers...
  • So is Ballmer tossing chairs about?? I think not. Probably sitting back with a smile on his face.
  • The Tao of Backup (Score:5, Interesting)

    by ei4anb ( 625481 ) on Sunday October 11, 2009 @08:05AM (#29710603)
    Sadly it comes to pass that every generation the Tao of Backup is forgotten and must be relearned through such trial by fire. http://www.taobackup.com/ [taobackup.com]
  • by cshbell ( 931989 ) on Sunday October 11, 2009 @09:08AM (#29710881)
    According to this comment post [engadget.com] on Engadget, it was a contractor working for Danger/Microsoft who screwed up a SAN upgrade and caused the data loss. Obviously, take this with a grain of salt until it's substantiated:

    "I've been getting the straight dope from the inside on this. Let me assure you, your data IS gone. Currently MS is trying to get the devices to sync the data they have back to the service as a form of recovery.

    It's not a server failure. They were upgrading their SAN, and they outsourced it to a Hitachi consulting firm. There was room for a backup of the data on the SAN, but they didn't do it (some say they started it but didn't wait for it to complete). They upgraded the SAN, screwed it up and lost all the data.

    All the apps in the developer store are gone too.

    This is surely the end of Danger. I only hope it's the end of those involved who screwed this up and the MS folks who laid off and drove out anyone at Danger who knew what they were doing.

    "Epic fail" doesn't begin to describe this one.
