
Tech Magazine Loses June Issue, No Backup 245

Gareth writes "Business 2.0, a magazine published by Time, has long warned its readers about the hazards of not backing up computer files. So much so that, in a 2003 article, they 'likened backups to flossing — everyone knows it's important, but few devote enough thought or energy to it.' Last week, Business 2.0 got caught forgetting to floss, as the magazine's editorial system crashed, wiping out all the work that had been done for its June issue. The backup server failed to back up."

  • by ScentCone ( 795499 ) on Wednesday May 02, 2007 @08:47AM (#18955137)
    Maybe not so bad as losing your entire monthly product, per se... but it does happen. I'll bet their accounting, HR, and other back-office systems are fine. This stuff is always ugliest at the department-server level in smaller operations. I'll bet they get some good Mea Culpa 2.0 editorials out of it, though.
  • err... (Score:3, Insightful)

    by cosmocain ( 1060326 ) on Wednesday May 02, 2007 @08:48AM (#18955141)



    Business 2.0 never had to rely on their backup software until that day, which is why they probably did not realize that it was either obsolete or dysfunctional.

    Sorry, but their MAIN problem is not in any way a dysfunctional backup system. Ever heard of verifying backed-up data?
  • by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Wednesday May 02, 2007 @08:48AM (#18955153)
    I imagine that they can still reassemble a lot of it from other files: they should still have all the layout pieces, for one, and all the authors ought to have at least rough drafts of their stories on their personal computers. The deadline's screwed, but they can probably get it out a few weeks late (or in July, depending on how often they normally publish).
  • by 91degrees ( 207121 ) on Wednesday May 02, 2007 @08:52AM (#18955197) Journal
    It seems unlikely to have crashed in such a way that a data recovery specialist would be unable to get most of the data back.

    But whatever the case - there is a useful lesson here. Make sure your backups are backing something up.
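That lesson lends itself to a concrete check. Below is a minimal, hypothetical sketch (the function names and directory layout are my own, not anything from the story) that verifies a file-level backup by comparing checksums of every source file against its backed-up copy:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    """Return a list of problems: files missing from or differing in the backup."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        if not dst.is_file():
            problems.append(f"missing: {dst}")
        elif sha256(src) != sha256(dst):
            problems.append(f"differs: {dst}")
    return problems
```

An empty result list is the only outcome that deserves to be called a successful backup; a job that merely exits without error proves nothing.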
  • by Rob T Firefly ( 844560 ) on Wednesday May 02, 2007 @09:03AM (#18955323) Homepage Journal
    This reminds me of the recent uproar over a car crash involving the New Jersey governor. He was critically injured because he wasn't wearing his seatbelt, and people freaked, asking what sort of role model he could possibly be. I argued that he was an awesome role model, because sometimes people need to see a mistake end badly for someone else before they'll do what's necessary to protect themselves from making the same mistake. Seeing a high-profile magazine get hit like this can do the same for backup slackers the world over.

    I don't know about you people, but after reading this (and giving it the "haha" tag) I'm going home and catching up on a couple of backups I've been slacking off on for a while.
  • by Chris whatever ( 980992 ) on Wednesday May 02, 2007 @09:05AM (#18955349)
    Hmm! Unless you are there watching the data being backed up, there is no way to know, unless you get a notification from your system that it has completed.

    Usually that is the case, but it has happened: one of my backups failed one night and someone needed a file restored from the previous day. If that company never checked its backups, or never configured some kind of notification on failure or success, then they are very lame.
  • by Anonymous Coward on Wednesday May 02, 2007 @09:13AM (#18955463)
    Actually that is what you get for having Geek Squad as your outsourced IT staff.

    Honestly, they CAN'T have competent IT. The FIRST thing you do in the morning is check the backups.

    I have an HP DAT jukebox here and I STILL check the backup logs to make sure the backup and verify succeeded last night. If they didn't, I mirror the important files right away and then run a manual backup so as not to lose the last 24 hours.

    I hope that Business 2.0 learned that paying top $$$ for competent IT is a good idea, and they should run an article about it.
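The morning log check described above is easy to automate. The sketch below assumes a made-up two-line log format (real backup software writes its own); the point is that a run counts as good only if both the backup and the verify succeeded within the last 24 hours:

```python
import re
from datetime import datetime, timedelta

# Hypothetical log format -- real backup software varies:
#   2007-05-02 03:00:12 BACKUP OK
#   2007-05-02 03:41:55 VERIFY FAILED
LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (BACKUP|VERIFY) (OK|FAILED)$")

def last_night_succeeded(log_lines, now, max_age=timedelta(hours=24)):
    """True only if both a BACKUP OK and a VERIFY OK appear within max_age of now."""
    seen = set()
    for line in log_lines:
        m = LINE.match(line.strip())
        if not m:
            continue
        stamp, phase, status = m.groups()
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        if status == "OK" and timedelta(0) <= now - when <= max_age:
            seen.add(phase)
    return {"BACKUP", "VERIFY"} <= seen
```

Wire the result into whatever alerting you already have; a check nobody reads is no better than a log nobody reads.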
  • Wrong problem (Score:5, Insightful)

    by mseeger ( 40923 ) on Wednesday May 02, 2007 @09:15AM (#18955507)

    The problem was, as always, not the backup. I've rarely seen problems resulting from the backup process. The troublesome process is the restore. Or as a friend once put it:

    Nobody wants backups; what everybody wants is a restore.

    In my twenty years in IT I've seen several companies making backups like a well-oiled machine. The backup process was well documented and everyone was trained to the point that they could do it with their eyes closed. But everything fell apart at the critical moment, because all they had ever planned for was making the backup. Nobody had ever imagined or tried a restore at scale. So they ended up with a big stack of tapes of unusable data.

    Backup is the means, not the goal.

    Regards, Martin

  • by Rob T Firefly ( 844560 ) on Wednesday May 02, 2007 @09:29AM (#18955715) Homepage Journal
    IANA publisher, but I would also imagine that in such a deadline-intensive business, data from a fried disk is about as good as lost. Sure, they can send their drives off to data recovery labs who could slowly recover an uncertain portion of the data for a pantload of money, but by the time that's done it'll be time for the next issue anyway. I'd guess it would be a lot quicker and cheaper to write off the disks and salvage what they can from everyone's local copies of the data.
  • by IdleTime ( 561841 ) on Wednesday May 02, 2007 @09:44AM (#18955925) Journal
    I am not surprised.

    There is not a week that goes by without me getting an issue from one of our regular analysts with a question about how the customer can salvage their data because they don't have a backup. My standard answer is that we may be able to save some data, but it's going to cost a lot of $$$. And I also say: "When you don't have a backup, you have either deemed that you can easily recreate the data or that it is not important to the company."

    And these are not mom & pop companies but big multi-million/billion-dollar companies.
  • by igb ( 28052 ) on Wednesday May 02, 2007 @09:46AM (#18955959)
    `` I bet you anything that someone in IT asked for money for the RAID''

    Perhaps, although my experience is that IT people are incredibly bad at framing business cases in terms more compelling than my daughter's request for a mobile phone for her birthday: a few vague reasons, followed by a sulk when asked for specifics.

    I keep 20TB on RAID5, and replicate it daily to a RAID5 array that has no components or software above the spindle level in common (Solaris/EMC and Pillar Data). The data we really care about is on RAID 0+1, in some cases with three-way mirroring. We take it out to tape, in case the filesystem pukes over all the copies or the RAID controller decides to go bonkers. We're about to put ten miles between the two file servers. At no point have I had much pushback from management over the money, once the risks and rewards were explained. Too often, IT people convince themselves that some Dilbert-esque stereotype of a manager is going to say no, and therefore make their case in a passive-aggressive style that will make anyone say no.


  • by Auntie Virus ( 772950 ) on Wednesday May 02, 2007 @09:48AM (#18955991)
    I have an HP DAT jukebox here and I STILL check the backup logs
    HP DAT? You'd better do more than check the logs. A test restore (if your users don't already test for you by deleting files) at least a few times a week might save your butt one day. Actually, DAT or not, test restores are a must. Logs lie.
  • Re:Rag (Score:5, Insightful)

    by UbuntuDupe ( 970646 ) * on Wednesday May 02, 2007 @10:06AM (#18956227) Journal
    nobody reads Business 2.0 anyway.

    I wish. I wish people didn't read Time, either (the publisher), but they do. Time's writing style is the dumbed-down, try-to-be-hip crap I wouldn't have gotten away with in sixth grade. Seriously. Like I said before [slashdot.org], to understand why its writing is like fingernails on a blackboard for me, consider how the same information would be conveyed by two sources:

    8-year-old: "6 divided by 3 is 2."

    Time magazine: "Okay, imagine you've got a half-dozen widgets, churned out of the ol' Widget Factory on Fifth and Main. Now, say you've gotta divvy 'em up into little chunklets -- a doable three, let's say -- and each chunklet has the same number that math professor Gregory Beckens at Overinflated Ego University calls a 'quotient'. The so-called 'quotient' in this case? Dos."

    Based on how that post got modded, I'm not alone in this.
  • by Hoi Polloi ( 522990 ) on Wednesday May 02, 2007 @10:30AM (#18956521) Journal
    I wouldn't use the term "role model" for things like that. I'd say "examples" is the better word. The governor was an example of what NOT to do.
  • by Stu101 ( 1031686 ) on Wednesday May 02, 2007 @11:11AM (#18957169) Homepage
    This is my story, and I bullshit you not! I work for a manufacturing company, the second largest in its field in the world. Great. However, the boss really does not like spending money.

    We eventually got a backup system using offsite backups (with a special client), and it seems to work OK. However, when it got to 100 GB, I was told to start pruning stuff. So I did. Long and short of it: even with the most important files backed up, we still have most things not backed up. Basically, I have almost half a TB of data that I am not allowed to back up because it's expensive. I can only back up 5 days' worth of data, as they are unwilling to pay any more money for it. The fun will come when someone wants a restore from last year. This, people, is the reality sometimes.

    Me, well, I really don't care anymore. I'm sick of having servers, important, mission-critical machines, sitting on single IDE disks. We sell online, great; problem is, our firewall is a non-redundant single IDE disk. If it goes (like it has in the past), we are down for days, losing emails, web traffic, web orders, remote ordering systems, EDI data, remote sessions, FTP, everything. DR? The solution proposed by upper management is, oh, we will buy some Dells and restore. Yeah, that's a good idea. After waiting a week for them to arrive, what exactly are you going to restore?

    This is more typical than you think, unfortunately. I'm just the guy that has to make do with what I can. No doubt when it fucks up, I will be blamed.
  • by Anonymous Coward on Wednesday May 02, 2007 @12:06PM (#18957991)
    I call bullshit. Too expensive? So they would rather lose a day's worth of sales (at minimum) than spend $200 on a 750 GB hard drive in a USB case to offload stuff and throw it in a cabinet?
  • by Sandbags ( 964742 ) on Wednesday May 02, 2007 @12:30PM (#18958357) Journal
    I work for a backup company that makes D2D backup appliances supporting more than 20 operating systems.

    First, no one really understands best practices for backup, and a lot of systems that are backed up "successfully" can't be restored anyway (in fact, most commonly this is Microsoft Exchange, the most important system in most companies!). Second, tape sucks! You MUST have disk-to-disk backups to have any true recoverability in today's world. Third, check your logs EVERY day; there's no excuse! Fixing a failing backup should be the number 1 priority, second only to an actual failed server you are recovering. Next, nobody spends enough on IT disaster recovery, and no one documents the recovery process properly. Your IT spending on DR should be approximately 25% or more of your total IT budget for server systems. At least 1 day per month should be used to practice system recovery or update the documentation covering it. Finally, nothing should ever be considered backed up until the server has been test-recovered, completely from scratch, at least once. At least some data should be recovered from backup media every day, just to be certain it can be done when needed. The test recovery should be of a random critical data folder or database, not the same stuff each time.

    Off-site DR is also important. Making sure that your entire data set for all critical systems is moved off site every 24 hours is a must. Included in this should be any media required to process a restore (not just the backups, but the install CDs, bare-metal recovery disks, license keys for all servers and applications, the DR documentation itself, network architecture information, the hardware and software configuration of each server, and all information regarding your ISP contract and system warranties from each manufacturer). If you don't have all this stuff, contract someone who knows what they are doing to make it for you.

    For each unique mission-critical system you have (mail, a critical database server that allows the business to operate, a point application server, a Citrix box, etc.), you should have a complete spare system meeting the system requirements, so that the system can be restored immediately in the event of an outage. Your system recovery tests should be performed regularly on that hardware. Best practice is also to keep those test boxes off-site when possible, but nearby enough to get to in a jiffy. If you don't have spare lab equipment, and don't have the budget for it, you can't afford to have those critical systems in house, and should consider outsourcing to a data center that does have those resources. Clustering is complicated and expensive, but spare chassis and a few spare drives don't amount to a huge IT burden. You don't have to have one for each server, just one that can handle the job of each unique mission-critical system (if you have 5 SQL servers, 1 Exchange, 1 Citrix, and 4 file servers, you only need 4 spare systems total).

    The average business that goes through a critical system disaster interrupting business for more than 48 hours requires 1 month of revenue to overcome the loss of each day of downtime. 40% of businesses that have a site disaster lasting more than 3 days go bankrupt within 90 days of the event. How much money will your business lose if you have to roll your purchase database back 2 days and lose all records of those transactions? How will your business survive if e-mail is out for 3 days? How much will you lose if your online store is gone for several days? How many customers will you lose if your support department is off-line for 2 days? How much will you be sued for if you miss a contractual deadline due to data loss? Can you afford NOT to spend the money to make sure this doesn't happen?!
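The 24-hour off-site rule above reduces to a small freshness check. The snapshot naming scheme here is invented for illustration; the idea is simply that if the newest off-site copy is older than a day, the alarm should sound:

```python
from datetime import datetime, timedelta

# Hypothetical naming scheme for off-site snapshots: "offsite-YYYYMMDD-HHMMSS.tar"
FORMAT = "offsite-%Y%m%d-%H%M%S.tar"

def newest_snapshot_age(names, now):
    """Return the age of the newest recognizable snapshot, or None if there are none."""
    stamps = []
    for name in names:
        try:
            stamps.append(datetime.strptime(name, FORMAT))
        except ValueError:
            continue  # ignore files that don't follow the naming scheme
    return (now - max(stamps)) if stamps else None

def offsite_is_fresh(names, now, limit=timedelta(hours=24)):
    """The 24-hour off-site rule, as a yes/no check."""
    age = newest_snapshot_age(names, now)
    return age is not None and age <= limit
```

Feed it a directory listing of the off-site store each morning and page someone when it returns False; "we thought it was copying" is exactly the failure mode in the story.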
  • Re:err... (Score:4, Insightful)

    by Speare ( 84249 ) on Wednesday May 02, 2007 @01:04PM (#18958975) Homepage Journal

    "we will buy new when those fail" is what we were told

    "Your successor will buy new when these fail." is the correct response to this.

  • by geek2k5 ( 882748 ) on Wednesday May 02, 2007 @01:49PM (#18959669)

    If management is going to brag about cost savings, make sure that you get documentation on their comments and your warnings. That way, if/when things turn to slime, you can put it in your resume that you tried to warn them.

    This may be needed for your personal recovery plan. It may also be needed if lawyers get involved and you end up facing charges.
