Server Failure Destroys Sidekick Users' Backup Data
Expanding on the T-Mobile data loss mentioned in an update to an earlier story, reader stigmato writes "T-Mobile's popular Sidekick brand of devices and their users are facing a data loss crisis. According to the T-Mobile community forums, Microsoft/Danger has suffered a catastrophic server failure that has resulted in the loss of all personal data not stored on the phones. They are advising users not to turn off their phones, reset them or let the batteries die in them for fear of losing what data remains on the devices. Microsoft/Danger has stated that they cannot recover the data but are still trying. Already people are clamoring for a lawsuit. Should we continue to trust cloud computing content providers with our personal information? Perhaps they should have used ZFS or btrfs for their servers."
As if millions... (Score:5, Funny)
homemade cell phone porn videos cried out and then were silenced.
"they should have used ZFS or btrfs" (Score:5, Insightful)
This seems a rather silly point to make. I know this is Slashdot and we have to suggest Open Source alternatives, but throwing out random file systems as a fix for poor management and HARDWARE issues is somewhere between ignorant and silly.
Perhaps they should have had at least mirrored or striped RAID, with an off-site backup every week or so?
Re:"they should have used ZFS or btrfs" (Score:5, Insightful)
I'd hazard a guess that the offsite backups were corrupted as well somehow, or were silently failing.
Re:"they should have used ZFS or btrfs" (Score:5, Insightful)
Exactly. This could be a software bug too, and that could easily destroy or corrupt the backup data as well. I really doubt this service was ran without backups.
The type of filesystem has nothing to do with this.
Re:"they should have used ZFS or btrfs" (Score:5, Insightful)
I really doubt this service was ran without backups.
Knowing 'enterprise' backups I'd bet there was at least a backup client installed and running. However, I'm equally sure that the backups were, at best, tested once in a disaster recovery exercise and were otherwise never verified.
Further, responsibility would probably be shared between a storage department, a server-operations department, and an application-management department, neatly ensuring that no single person or function is in a position to know what data is supposed to be backed up, what limitations exist to ensure consistency (cold/hot/incremental/etc.), or to monitor that this is actually what happens, and keeps happening, as the application and server configuration evolve.
Backups of dubious value do not seem to be a rarity in enterprise settings.
Re:"they should have used ZFS or btrfs" (Score:5, Interesting)
Dubious backups? Depends. We had a 6TB cluster that was notoriously difficult to back up. This went on for years: it took too long, failures caused issues downstream, etc. Then someone took a moment to realise that the application was not capable of re-using that 6TB of data if it was restored; once the data came in, it was processed and archived. To recover the application, all they had to do was back up a few gigs of config and binaries and restart slurping data from upstream. Voila: the backup was stripped down to almost nothing, 6TB a day less data to back up, and next to no failures since the backup now ran so quickly.
Then there is the case of an application where the vendor and application developer signed off on a backup solution using a daily BCV snapshot. What they failed to tell us was that the application held data not only in a database, but also in a 6GB binary blob file buried deep in the application filesystem. If the database and the binary were out of sync in any way, it could mean missed or replayed transactions, or a generally inconsistent application. As this was an order-management platform, that was bad. You can guess the day we found out about this dependency... yup, data corruption. Bad vendor advice screwed the binary file, and all we had to go on was a backup some 23 hours old in which the database was backed up an hour after the application. Because of a corresponding database SNAFU, the recovery point was actually another day before that, with the database having to be rolled forward. It was at this point we found out that, despite the signed-off backup solution, the vendor's documented recommendation (which was not supplied to us) was that the only good backup was a cold application one, which was not possible on a core order platform. Thankfully, after some 56 hours of solid work, the application vendor managed to help sort the issue out, and the restore from backup was not actually needed. The backups were never really tested, as the DR solution relied on SRDF; data corruption was never really a part of the DR design (at a very high level, not just on this platform).
So there you have it. Two dubious Enterprise backups - one not needed, the other not usable.
Re: (Score:2, Insightful)
When I read that you had quoted "I really doubt this service was ran without backups," I twitched and the thought
I know it's bad grammar, but let's just ignore it, please
was loud in my ears. I was so relieved when I saw that you weren't mentioning it. I don't know what this makes me, but it happens all the time. I'm definitely bothered by poor grammar and spelling, but I want no one to ever point it out.
Re:"they should have used ZFS or btrfs" (Score:4, Informative)
I used to have one of these things.
The phone is (as someone above pointed out) a local cache of what's on the server side. The live database/back end is what crashed. When you make a change on the phone, it immediately sends that change to the server. You can log in to the Sidekick web site and make changes there, which appear quickly on your phone. If you reboot your phone, it will retrieve anything it needs from the server side. Apparently, the phone doesn't even keep a permanent local copy on some sort of non-volatile storage (hence "Don't turn off your phone").
It's like someone who uses Google Apps and stores all their documents on Google's servers. If that system went down, you'd be screwed, except that you COULD have backed your documents up locally. In this case, you cannot.
I don't really like the term "cloud computing." All it means is server storage somewhere on the Internet. Under this term you could call any web site a "Cloud." It's ambiguous at best.
Re:"they should have used ZFS or btrfs" (Score:4, Funny)
Re: (Score:3, Insightful)
I've had something like that happen. The recovery system for a partner had never been tested with a _full_ recovery, only with recovering a few selected files. But because someone decided to get cute with the backup system to pick and choose which targets got backed up, individual directories each got their own backup target. Thousands and thousands of them. And the backup system had a single tape drive, not a changer.
The result was that to restore the filesystem, the tapes had to be swapped in and out to g
Re: (Score:3, Funny)
Something tells me you have grey hair and wrinkles. And I say that in a good way.
Re:"they should have used ZFS or btrfs" (Score:5, Funny)
It's not the gray hair (or what is left of it!), and those aren't wrinkles. They're laugh lines from the terrific amusement when some youngster ignores the hard-won lessons of the last millennium, especially when they have to call me or someone like me to clean up the mess. The laugh lines are especially deep from when I collected a paper trail to show where their supervisor ignored my written warnings about the danger: those are used with caution, but can be very, very handy.
Re: (Score:3, Interesting)
Then again, good back-up policy predates computers. If Microsoft/Danger had the same dedication to backups of valuable documents as monasteries did back in the 1000s, this sort of mess wouldn't have happened.
Re:"they should have used ZFS or btrfs" (Score:4, Interesting)
The kind of filesystem can help. I'm familiar with ZFS concepts, so I'll stick to those:
In ZFS, when you write to a file you don't write over the pre-existing data; you write elsewhere, and the new block gets mapped in upon success. The old data is still there, and you can see the aged mapping (you know what was there). At this point the filesystem can recycle that space, but you can switch this pruning off, and then you have a complete record of everything that was ever done on the disk. To stop it ever running out of space, I can either add disks to the pool, or prune very old data (older than a given age, maybe 6 months?).
So it helps.
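For anyone unfamiliar with the idea, here is a toy sketch of the copy-on-write behaviour described above. This is plain Python, not real ZFS code or its on-disk format: it only shows why a write never destroys the previous version when pruning is switched off.

```python
# Toy model of copy-on-write (NOT ZFS, just the idea described above):
# a write maps in new data and merely retires the old mapping,
# so nothing is ever overwritten in place.

class CowFile:
    def __init__(self):
        self.blocks = {}    # live mapping: block number -> data
        self.retired = []   # (block number, old data) pairs, kept if pruning is off

    def write(self, blockno, data):
        if blockno in self.blocks:
            # Retire the old mapping instead of overwriting it.
            self.retired.append((blockno, self.blocks[blockno]))
        self.blocks[blockno] = data  # new data lands "elsewhere"

    def old_versions(self, blockno):
        return [d for b, d in self.retired if b == blockno]

f = CowFile()
f.write(0, "v1")
f.write(0, "v2")             # "v1" is retired, not destroyed
print(f.blocks[0])           # v2
print(f.old_versions(0))     # ['v1']
```

With pruning off, the retired list only grows, which is exactly why the poster has to either add disks to the pool or expire data older than some cutoff.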
Re:"they should have used ZFS or btrfs" (Score:5, Interesting)
This is one reason why our corporate policy is that we actually validate backups for every system on a regular basis (meaning a full restore from a tape recalled from off-site), where the frequency is directly proportional to the criticality of the system. The more critical, the more often we test. On our iSeries, they restore the weekly backup tape EVERY week on the QA server, both to refresh it AND to validate the backups. We also have a quarterly 'random' test where a system is chosen at random and must be recovered from bare metal using only our standard procedures plus the backup tape.
We've discovered all kinds of strangeness with backup tapes through the years. Our Tier 1 systems have completely separate instances in geographically diverse areas, with data-replication.
Granted, this isn't cheap, but our data isn't either.
Re: (Score:2)
Re:"they should have used ZFS or btrfs" (Score:5, Informative)
I've always been amazed that tape is trusted as much as it is. It seems (anecdotally at least) to have a disproportionately high failure rate.
I'm not sure that's the problem so much - after all, LTO has a read head positioned directly after the write head and automatically verifies as it goes along. A tape error is dead easy to spot.
There are a number of places where things can fall apart, and tapes don't even need to come into the matter:
Re: (Score:3, Informative)
I found one of these when doing a backup/restore to upgrade a server (backup the data from ServerA and restore the data on ServerB). I
The value of data (Score:5, Insightful)
Granted, this isn't cheap, but our data isn't either.
Microsoft bought Danger for half a billion dollars. Current estimates of the value of this data are roughly... half a billion dollars, plus a little. There's little doubt that in addition to destroying the entire value of the acquisition they've created a connection between "Microsoft", "Danger" and "data loss". In their release T-Mobile isn't being shy about tying those things together. Not good. That's going to have impacts even for some completely unrelated cloud-based products like Azure [microsoft.com].
Somebody's about to get a really awkward performance review.
Re: (Score:3, Funny)
I don't know where you hang out at night, but where I hang out people who call themselves things like "webmistressrachel" are not men.
Like I said, your mileage may vary..
Re:"they should have used ZFS or btrfs" (Score:5, Funny)
There are plausible reports as to how this happened here [hiptop3.com].
tl;dr - They tried upgrading their SAN without making a backup first, and the upgrade somehow hosed the entire SAN.
Re:"they should have used ZFS or btrfs" (Score:5, Interesting)
There are plausible reports as to how this happened here [hiptop3.com].
tl;dr - They tried upgrading their SAN without making a backup first, and the upgrade somehow hosed the entire SAN.
That's the thing that has always worried me most about SANs: you have all your eggs in one basket. No matter how redundant or reliable the hardware is, one bad update or trigger-happy admin can cause the instant loss of all your data. That's only slightly better than having your data center burn down. You still have your hardware, but a total restore like that can be a nightmare. I've heard somewhere that 80% of corporations couldn't recover from a scenario like that.
Here are some fun numbers: a typical tape restore runs at something like 70MB/sec per tape drive, if you're lucky. Some of the small low-end SANs I see people buying these days are 10TB or bigger. At that speed, it takes about 40 hours to restore the complete system. What's worse is that it doesn't scale all that well either: you can add more drives, but the storage controllers and back-end FC loops become the limit. In a big cloud-provider scenario, a complete restore could take days, or even weeks.
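The arithmetic behind that 40-hour figure is easy to check. The 70MB/sec and 10TB numbers are the comment's own assumptions, not measurements:

```python
# Back-of-the-envelope restore time: one tape drive, one stream.
tape_speed = 70e6   # bytes/sec per drive (assumed, per the comment above)
san_size = 10e12    # 10 TB SAN (assumed, per the comment above)

hours = san_size / tape_speed / 3600
print(round(hours, 1))   # 39.7, roughly the "40 hours" quoted
```

Every extra drive divides this linearly only until the controllers and FC loops saturate, which is the scaling limit the comment points out.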
What's scary is that mirroring or off-site replicas don't help. If your array starts writing bad blocks, those will get mirrored also.
Re:"they should have used ZFS or btrfs" (Score:5, Interesting)
It's amazing that a consumer-facing company like T-Mobile risked itself by hosting its application on a Microsoft platform. Furthermore, where is the DR in all this? Who the f*ck in their right mind fiddles with a SAN without confirming a full backup of all applications/databases? It appears that Hitachi and Microsoft are at fault here (if SAN maintenance is the root cause of this failure), but T-Mobile is the fool for allowing these companies to ruin its data. Not only will there be no consequences for MS or Hitachi over this issue; T-Mobile will be pouring in more money to fly in the MS and Hitachi consultants.
Re: (Score:3, Informative)
"Who the f*ck in their right mind fiddles with a SAN without confirming a full backup of all applications/databases?"
people who drink the kool-aid whenever vendors of said products repeatedly swear up and down all their tasks/patching/operations are 'totally no-impact and no-visibility changes.' combine that with people unwilling to take downtime or spend $$$ to properly protect the contents ahead of time and you have just cooked a recipe for disaster.
-r (not speaking from personal experience.. of course.. :/ )
Re: (Score:3, Informative)
You assume Danger used a MSFT platform (Score:4, Insightful)
Looking at the timeframe that Danger was acquired by MSFT and that the Danger OS was likely based on NetBSD (http://en.wikipedia.org/wiki/Danger_Hiptop), it's more likely that Danger was still using NetBSD as their Server Software and this was merely a process issue. Blaming it on the "Microsoft Platform" without any real data is just spreading FUD.
Re: (Score:3, Insightful)
You assure us anonymously without any proof? Of course.
Re: (Score:3, Funny)
He's modded +1 Informative. I guess that's proof enough! :D
Re: (Score:3, Interesting)
In our environment, a large government shop, our data volumes are capped at around 1 TB of storage for that very reason. Between the SAN, and the tape backups...they just simply have to create a physical cutoff point for data storage due to those onerous recovery periods.
There is nothing wrong in our shop with having TWO 1 TB volumes, but you will never get approved to have one single 2TB. Problem solved...at least for file storage. Database backups are managed via other mechanisms like replication.
Re: (Score:3, Informative)
Even with a SAN, you need to limit volume sizes to whatever you can restore within the acceptable restoration window. There are also those times when you just want to run a chkdsk, and if the volume is too big, it takes too long.
That being said, I can't believe they didn't have any backup. Even if they skipped the pre-upgrade backup, they should have had one from last night/week/month. Any of those options would be better than nothing. I have to assume they were doing backup-to-disk on the same SAN.
I work in telecom - Sr Tech Arch (Score:3, Interesting)
I work in telecom at a different provider. SAN upgrades are performed by the SAN vendor and, IME, they always demand a complete backup prior to starting any work unless the customer demands otherwise. If the customer doesn't want the backup, we always had to get a Sr VP to sign off. There were about 10 Sr VPs in the company - not like at a bank where everyone is a VP.
Usually, we would perform firmware upgrades only when migrating from old SAN equipment into new. The old equipment would be upgraded and used
Re: (Score:3, Insightful)
Repeat after me, you haven't got backups unless you've tested RESTORES.
Re:"they should have used ZFS or btrfs" (Score:5, Insightful)
The point of backups is not to tick the box saying "backup taken?" but to provide your business / customers / whatever with a reliable last resort for restoring almost all their data. If you don't have 100% certainty that it will work, you don't have a backup.
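As a minimal illustration of "test the restore, not the backup": the sketch below backs up a directory, restores it into scratch space, and compares checksums. The paths and the tar-based format are purely illustrative, not anyone's actual setup.

```python
# A restore test in miniature: a backup only counts once a restore of it
# has been verified. Paths and the tar format here are illustrative only.
import hashlib
import os
import tarfile
import tempfile

def checksums(root):
    """SHA-256 of every file under root, keyed by relative path."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                sums[os.path.relpath(path, root)] = hashlib.sha256(fh.read()).hexdigest()
    return sums

def backup(src, archive):
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=".")

def verify_restore(src, archive):
    """Restore the archive to scratch space and compare it to the source."""
    with tempfile.TemporaryDirectory() as restored:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(restored)
        return checksums(src) == checksums(restored)

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as work:
    with open(os.path.join(src, "contacts.txt"), "w") as fh:
        fh.write("contacts, photos, notes")
    archive = os.path.join(work, "backup.tar.gz")
    backup(src, archive)
    print(verify_restore(src, archive))   # True only if the restore matches
```

A real shop would run this against a tape recalled from off-site and compare against production data, but the principle is the same: the restore path gets exercised end to end, on a schedule, not once.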
Re: (Score:3, Interesting)
It's not a backup unless you can prove it will restore. Until then it's just a waste of tape, or disk, and time
True. There's a similar problem in biological research, where people think they have secured frozen samples but they haven't tested whether the samples are valuable after thawing. For example, frozen cells might not be viable, or RNA might be degraded. Too often the samples are just wasting freezer space. Anybody can freeze (or backup), the question is whether what you thaw (restore) is valuable.
Autorestore - multiple birds one stone. (Score:3, Insightful)
To the standby or testing system. Our staging/testing systems all run yesterday's production data, restored from the most recent backup.
If your backups don't work, then neither will your test/staging server... which will be noticed.
What do you get?
* Backups tested every day.
* A test/staging/standby system identical to the production.
* Something the business can run all the crappy queries they like against without affecting the production system.
Re: (Score:2)
What's really retarded is that using ZFS would prevent bitrot and warn you of impending or intermittent hardware failures, but it is seen as OSS zealotry by people who haven't thought the problem through.
Re: (Score:2)
I wouldn't say that, but ZFS is still a little young for my liking. There are plenty of horror stories concerning data loss, and more to the point plenty of recent horror stories.
Re: (Score:3, Interesting)
Re:"they should have used ZFS or btrfs" (Score:4, Informative)
This seems a rather silly point to make. I know this is Slashdot and we have to suggest Open Source alternatives but throwing out random file systems as a suggestion to fix poor management and HARDWARE issues is some place between ignorant and silly.
Not as silly as it might appear. One of ZFS's main functions is that it can compensate for some degree of hardware failure.
Re:"they should have used ZFS or btrfs" (Score:5, Interesting)
What if the data was on an EMC storage array and the tech told them it's all lost? What if you're dealing with a Tier 1 vendor (I'm looking at you, Dell EqualLogic) that swears UP and DOWN that there is no way to recover the system after a second drive has been pulled out of a RAID 5? Hell, try just a standard RAID 5 card from a Tier 1 vendor. (I'm not talking about calling 3ware support directly; they are honestly good, and I've recovered a few arrays with them.)
I "suspect" that they are running it off a storage array that failed big time, or lost the LUN, or something just decided to die and take the server with it. There is just too much we don't know. Was Danger installed on multiple servers? Was it clustered? Is it a cloud system? Does it run its own storage system, or does it require additional hardware?
But you know what? ZFS, EMC, even Windows 2008: all moot. Why? WHERE ARE THE TAPE BACKUPS?!?! SERIOUSLY. The ONLY way they could have lost ALL that data is if they didn't have a backup solution. Otherwise their "press release" would say "...however, we will be restoring the data from last week's/month's tapes..."
I do like how they keep saying "Microsoft/Danger" as if they are at fault. A good admin would expect even a new car to catch fire and run into a bus full of nuns.
Re: (Score:3, Interesting)
Well the first problem was the EMC storage array.
The second problem is believing the tech when he says the data cannot be reclaimed.
The third problem is using a simple RAID 5 volume for a great deal of data. Multiple drives fail all the time! Hell, racks of servers fail in unison.
Even if the DCB data is corrupted this can be corrected even on a large SAN.
All or part of the data is generally recoverable.
Either this was an impossibly horribly managed install or something very complex has happened. Generally, t
Re: (Score:2)
The problem is not how to compensate for "some degree of hardware failure", but how to avoid any data loss. I believe the answer is `keep full backups` and you can do this perfectly well on FAT32.
Even with full backups, you'll still lose the data you had between the last backup and the failure event
Re:"they should have used ZFS or btrfs" (Score:5, Informative)
Re:"they should have used ZFS or btrfs" (Score:5, Informative)
undelete (not de-corrupt) (Score:2)
Have ZFS/btrfs developed tools to undelete or rescue files? It is pretty hopeless for ext[234] in my experience.
Re: (Score:3, Informative)
Yes, it's called a snapshot. Take a snapshot and you can either roll the entire system back to that point in time, or just browse its contents and extract the files you want.
Re: (Score:2)
Err... This is cloud computing, right? Why do you have off-site backups every week or so?! The data should be stored in multiple geographic locations ALL THE TIME. The ZFS suggestion isn't as dumb as you might think, you tell ZFS not to prune old data, then if stuff gets "deleted" it's still on the disk (I won't bore you with an explanation here). You're right ZFS won't help you against something that destroys (physically) the disks (so multiple locations are required) but it will help you against hacking o
A server failure? (Score:4, Informative)
A server failure caused all of the data to be lost?
No backups? Not even a spare server with a mirror of the data? No servers in different places? No off-site backup strategy?
As an aside, why would that data be stored in volatile RAM with no battery backup? All of my graphing calculators have a special battery to preserve RAM, and they aren't even supposed to store important stuff. Flash is cheap enough these days; why should simply removing the battery cause important data to be lost?
Re:A server failure? (Score:4, Insightful)
Reportedly Sidekicks are thin clients: other than making phone calls, everything on the phone is saved on the server side. Which is a special kind of retarded in today's world, where a BlackBerry performs all the same functions and provides a local backup feature. But yeah, as for the backups: all your backups are worthless if your backup code is flawed and nobody ever checks the backup tapes. When MS bought the service, they probably changed the location of the servers, plugged everything back in, and kept going. I imagine a project like that would be on a short timetable, and "checking that data is really being written to the backup tapes" is low on the priority list when the service is already live.
Re:A server failure? (Score:5, Informative)
There are some interesting background leaks on the takeover of Danger in this article [appleinsider.com], which seem to imply they cut a lot of staff and gutted the company, which is now running on a skeleton crew. So I guess it's not too surprising that this sort of mistake was made. Not the most reliable source, but they definitely did cut a lot of Danger staff after the acquisition.
Re:A server failure? (Score:4, Funny)
LoB
Thin client: Android, too? (Score:3, Insightful)
Reportedly sidekicks are thin clients, other than making phone calls, everything on the phone is saved on the server side. Which is a special kind of retarded
Isn't that also how Android works?
I mean sure, the apps and such are on internal flash, but it's a different story for your "important" data such as email or contacts list. Heck, as I've learned, one can't even read one's existing ("synced") email without a working web connection. How they can call that "syncing", and what it's doing besides simple header indexing, is beyond me.
This is another reason I am loath to trust "the cloud" -- if I know I can be self-sufficient (in a data accessibility context), tha
Re: (Score:2)
Isn't that also how Android works?
No.
Re:Thin client: Android, too? (Score:5, Informative)
Re: (Score:3, Informative)
Re:A server failure? (Score:5, Funny)
A server failure caused all of the data to be lost?
Maybe it was the server failure . . . maybe they only had one . . . ?
What about the backups? (Score:4, Interesting)
Sidekick (Score:5, Funny)
shit, is that TSR still hanging around? goodness!
If the above means anything to you, "apt-get install joe mc" will make you smile as well.
Re: (Score:3, Informative)
Ohh yes.. Need an ASCII table? It's just a Ctrl-Alt away
Means what it said (Score:3, Informative)
shit, is that TSR still hanging around? goodness!
Dude, what part of "Stay Resident" did you not understand? It's not like selling your computer rids you of it.
That's why I never ran them, nor consorted with daemons.
Backups? (Score:3, Interesting)
Either this is a really, really serious meltdown which completely killed not only the server but all their backups as well (and what're the chances of that?), or their IT guys have been really, really slack and just didn't make any backups...
Guess they should have used a better smartphone, like *anything* else on the market... Even the cloud-centric Pre will still work if you don't have access to the Cloud - even if Google and/or Palm dies, you'll still have all your information on your phone! Jesus... Doesn't inspire confidence...
Re:Backups? (Score:5, Insightful)
Or this was really a software error, and the backup servers in another datacenter just copied the faulty data/delete command.
They should really be far too big to have all their data stored in a single datacenter with no offsite backup. (Or they should have an entry on thedailywtf.com.)
Re: (Score:2)
An article linked above suggested the cause was a firmware upgrade failure on an HDS array; it sounds like it may have lost the config or done something nasty during the upgrade. At any rate, the core question is: where is the backup tape?
Microsoft/Danger (Score:3, Funny)
It's The Backups, Stupid (Score:5, Insightful)
This is an issue of irresponsibility, plain and simple. The company responsible for maintaining the data should, at the very least, have had some full system backup from last month. If they had some old backup somewhere, at least you could chalk it up to a systems failure, a bad backup tape, a bad admin, or something.
But the fact that there is no backup anywhere indicates brazen negligence on the part of everyone responsible for the data. Everyone who had a part in designing the system and managing the system is culpable. The most ridiculous part of this is the over-reliance on server-side data storage by the sidekick designers.
Re:It's The Backups, Stupid (Score:5, Insightful)
But the fact that there is no backup anywhere indicates brazen negligence on the part of everyone responsible for the data. Everyone who had a part in designing the system and managing the system is culpable. The most ridiculous part of this is the over-reliance on server-side data storage by the sidekick designers.
I will bet you there were good people -SCREAMING- to fix the backups, implement and test failover, and do all sorts of other good things. In my experience, things like this happen because management refuses to spend money fixing problems that have not lost customers yet.
Microsoft was testing the US gov edition (Score:5, Funny)
The congress critters have learned a lot from the "terrible mistake" of email backups.
From cute page boys to Iran-Contra, MS can market this as a feature.
DIY phone backups (Score:4, Informative)
Re: (Score:2)
Really? Mine's grape, and i use itunes.
Seriously, even ActiveSync looks good now.
Re: (Score:2)
The Sidekick saves everything server side. Other than making phonecalls, it's a thinclient.
WTF (Score:5, Insightful)
This is unbelievably bad. The real problem is: why aren't there incremental off-site backups to another server farm? A weekly binary-difference snapshot would have made this failure less catastrophic.
Ultimately, with a complex application like this, you can't guarantee 100% that the code doesn't have a bug in it that could result in loss of user data. You can be ALMOST sure it won't, but 100% is not possible with current analysis techniques. (even a mathematical proof of correctness wouldn't protect you from a hacker)
But with a properly done set of OFFLINE backups, stored on racks of tapes or hard disks in a separate physical facility, you can be pretty sure that data isn't going anywhere.
Huh? (Score:3, Insightful)
Uh, those would do nothing in this case, where it appears the entire DB has been lost. You need a regular full backup, or diffs and incrementals are just cruft. It appears they don't even have that, since there's no talk of restoring to month-old (or ?) data.
Re: (Score:3, Interesting)
"incremental" ... "weekly binary difference"
Uh, those would do nothing in this case,
I agree. Weekly? WEEKLY?!!! What is this... 1980? Hell, even in 1980, people with critical data in their Apple II spreadsheets kept more than one copy of their data on a daily basis.
I'm not sure why, but one of our customers had a backup daemon running with just incrementals being done. There was one full backup done two years ago and an incremental every night. Well.. they had a computer fry one weekend. It was a crappy windows backup program with only a point and click interface. No way in hell am I go
Re:WTF (Score:5, Interesting)
So maybe the backup system needed to be checked, or a cron job verified, or maybe the computer in Joe Fired's office was part of the backup process in some small way, but one important enough that the whole job was failing every night.
As I said, Microsoft tried to replace the Danger stack with Microsoft software, but it wasn't going to work, or they got too much backtalk (think of Softimage) and threats of everyone leaving if they had to port to the WinMo pile/stack. They moved anyone who'd go over to Pink and left the rest to keep the life-support systems running. Oops, they failed.
With Ballmer publicly saying that WinMo has been a failure, hearing the press call WinMo 6.5 a yawn amid expectations that the Sony PS3 will eclipse the MS Xbox, and recently reading about how he's telling people that IBM doesn't know what they are doing... there's probably a new monkey-boy dance going on inside his office that we'd love to see. It might be too dangerous to get close enough to record it.
Will Microsoft ever make any profit from anything outside of MS Windows and MS Office? Ballmer's 8-Ball still seems to be telling him something very different from what everyone else is seeing.
LoB
Some reading (Score:2)
Forget all the speculation and semi-random after-the-fact suggestions; I am waiting for the write-up that explains how this monumental cock-up occurred. I hope I don't just learn that 'backups would have been a good idea'.
Your boss (Score:2)
He also hopes that you are not going to learn only now, that backups would have been a good idea.
You SHOULD have said, "I hope THEY don't just learn that 'backups would have been a good idea.'"
Your boss again. This is what you meant, right?
Re: (Score:2)
Erm, not quite.
I was stating that if I read the report I hope I don't just learn that MS/Danger concluded that backups would have been a good idea.
FWIW: Our corporate backup strategy (for which I am responsible) comprises a mesh of servers across some of our sites (we have 35) that run daily backups, syncing data sets between sites and providing a three-tier level of daily, weekly and monthly snapshots. I can restore any single file back to its state within the last 90 days (more if needed) at the click of
Bad brand (Score:2, Funny)
It's like being kicked in the side.
T-Mobile Press Release (Score:2, Funny)
The clue is in the name of the software (Score:2, Funny)
RIP Sidekick (Score:5, Insightful)
With all the competition in the smartphone market today, this is probably an unrecoverable error. If they manage to recover the data then they will come off as heroes for having the courage to tell their customers promptly. Otherwise they just look like what they are: incompetent. No great loss, though.
Irresponsibility to EPIC proportions. (Score:3, Insightful)
HOW THE HELL DO THEY NOT HAVE OFF-SITE TAPE BACKUPS????
So essentially, everybody's Sidekick backup data, which is apparently critical should they ever lose power, was all concentrated on A SINGLE SERVER? I hope they at least say their tape backups caught fire and their replicated server died on the same day too...
Their retention lines are going to be hot this Columbus Day weekend! The iPhone is getting cheaper...
Re: (Score:2)
Forgot to mention that a supporting reason for why T-Mobile will deal with cancellations left and right for a little while is because tons of people hate the Sidekick anyway, and this EPIC FAIL is an EPIC excuse to jump ship right now.
Re: (Score:3, Insightful)
T-Mobile says, "but I thought you were going to back us up!"
Robbie says, "We didn't get rich buying a lot of servers, you know!"
Re:Irresponsibility to EPIC proportions. -- yes (Score:4, Interesting)
A) The Sidekick apparently doesn't store anything, so customers can't make backups that easily, even if they wanted to, and
B) Danger designed this phone to store everything server-side. It is incomprehensibly foolish to not include a SUPER SOLID backup strategy as well. This problem has been ongoing for several days now; I don't know if the data was fine on the onset of this problem, but the infuriated customers have all the right to demand everything AND the kitchen sink for losing practically everything they had.
This may have to do with the "Pink" project fiasco (Score:5, Interesting)
According to a very long article on AppleInsider:
http://www.appleinsider.com/articles/09/10/09/exclusive_pink_danger_leaks_from_microsofts_windows_phone.html&page=3 [appleinsider.com]
MS was misleading T-Mobile about the state of Sidekick support, apparently charging hundreds of millions every year for, and I quote, "a handful of people in Palo Alto managing some contractors in Romania, Ukraine, etc". This is apparently because most of the Sidekick devs had either moved to Pink or quit out of disgust.
It is an ancient story, endlessly repeated (Score:5, Informative)
It is the development Thunderdome.
Two companies enter, MS comes out, slightly fatter.
If you do business with MS, you are riding a tiger with the brains to realize that lunch is only a roll on the ground away.
MS really should be renamed to BubbaSoft. Get into the shower with BubbaSoft and you know what is going to happen.
Re:It is an ancient story, endlessly repeated (Score:5, Funny)
Just don't drop the SOAP.
What do you expect with a name like (Score:2, Funny)
Interesting article about Pink/Danger/Sidekick (Score:5, Interesting)
Interesting article about the Microsoft/Pink/Danger/Sidekick [roughlydrafted.com] relationship and leaks indicating that Microsoft are trying to kill Sidekick without telling the partners. Microsoft would never do such a thing of course ...
Rich.
Yesterday... all those backups seemed a waste... (Score:5, Funny)
Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.
Suddenly,
There's not half the files there used to be,
And there's a milestone hanging over me
The system crashed so suddenly.
I pushed something wrong
What it was I could not say.
Now all my data's gone and I long for yesterday-ay-ay-ay.
Yesterday,
Need for backup seemed so far away.
Seemed my data were all here to stay,
Now I believe in yesterday.
Anonymous
Foggy idea? (Score:2)
Cloud computing?
That ain't no cloud. That's the fog obscuring the view of sanity.
IT has been trying this crap ever since the emergence of personal computers.
It's all in the name (Score:2)
MS: Actually my name is Microsoft Powers...
Clerk: It says here - name: Danger Powers
MS: No no no no no... Danger is my middle name
Clerk: Okay, Microsoft Danger Powers...
The smile (Score:2)
The Tao of Backup (Score:5, Interesting)
Claimed information from the inside (Score:5, Interesting)
"Epic fail" doesn't begin to describe this one.
Re:Why not store the data on phone permanent memor (Score:4, Informative)
Because the entire Sidekick architecture is very client-serverish, not transparent as with ordinary phones (GPRS/EDGE/UMTS/etc. through a NAT to internet at large); the server is supposed to be responsible for all that data, and the phone is just caching it. Given that architecture, asking why the local copy is on volatile RAM is analogous to asking why your CPU doesn't have a battery backup for system RAM, or even L2 cache.
That's one of the big reasons I didn't go with a Sidekick, even though they have (or had, last I was shopping around) basically the cheapest internet plans available: they push all sorts of stuff that's handled by the phone in any other system off to the Danger servers. While that does expose you to other people losing your data, as seen here, I didn't even consider that. I just like having a direct internet pipe, so I can run whatever software I want locally.
That said, there are plain benefits to the Sidekick model, for some people. Basically, if you don't want to do funny stuff on your phone, and if you're no less incompetent than the MS/Danger sysadmins, it's better. After all, if you drop your sidekick in a toilet, run over it with a truck, and vaporise it with a plasgun, you can just get a new one and have all your data back -- which is good, since if you're 95% of people, you've _never_ backed up your phone's data. But it's not for me, and given your desire to have your phone work as a PDA even if you power-cycle it in a wilderness/cave/other net-less place, it's not for you either.
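The parent's point about the phone being just a volatile cache over an authoritative server can be sketched concretely. This toy model is purely illustrative (the class and method names are made up, not Danger's real API), but it shows why "don't power off your phone" was the advice: the RAM cache survives a server wipe, a power cycle does not.

```python
class DangerServer:
    """Toy model of the authoritative server-side store."""
    def __init__(self):
        self.store = {}

    def save(self, key, value):
        self.store[key] = value

    def load(self, key):
        return self.store[key]      # raises KeyError if the data is gone

    def catastrophic_failure(self):
        self.store = {}             # no backups, apparently


class Sidekick:
    """The handset keeps data only in a volatile RAM cache."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def write(self, key, value):
        self.cache[key] = value
        self.server.save(key, value)  # server is the system of record

    def read(self, key):
        if key in self.cache:         # cache hit: survives server loss
            return self.cache[key]
        return self.server.load(key)  # cache miss: needs the server

    def power_cycle(self):
        self.cache = {}               # RAM gone; must re-sync from server
```

After `server.catastrophic_failure()`, cached entries are still readable on the handset; one `power_cycle()` later, every read falls through to an empty server.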
Re:See it as an opportunity (Score:4, Insightful)
Now is the opportunity for opensource to show what it's good for. Someone whip together a small app to extract all info from the Sidekick, put it up on sourceforge for FREE and you have tons of goodwill for OSS. Of course, the app should be Linux-only, thus forcing all Sidekick users to install Ubuntu...
Thus eliminating any goodwill that would have been gained...
Really, if you think that open source is a viable option for the masses, you shouldn't care which operating system a powerful application like the one you describe is on. If you really care about using open source for goodwill, releasing it simultaneously on all operating systems should be your goal. How is forcing people to use Ubuntu via software applications any different from Microsoft forcing people to use Windows via software applications?
Re: (Score:2)
Or forcing them to the effort of sticking a live boot disk in, and maybe also making their system boot from CD.
Or forcing them to get the source and port it to Windows.
Re: (Score:2)
A better question is... do they run AMANDA [amanda.org]?
Based on this story, probably not.