Data Storage Hardware

Tech Magazine Loses June Issue, No Backup

Gareth writes "Business 2.0, a magazine published by Time, has long warned its readers about the hazards of not backing up computer files. So much so that a 2003 article of theirs 'likened backups to flossing — everyone knows it's important, but few devote enough thought or energy to it.' Last week, Business 2.0 got caught forgetting to floss when the magazine's editorial system crashed, wiping out all the work that had been done on its June issue. The backup server had failed to back up."
  • by Anonymous Coward on Wednesday May 02, 2007 @08:48AM (#18955155)
    Why isn't it a default for an OS to ask where backups should go when it is installed? Backups are like any other kind of security ... necessary. You shouldn't have to hunt for instructions on how to back up; they should be in your face.
  • by Orange Crush ( 934731 ) * on Wednesday May 02, 2007 @09:08AM (#18955397)

    There aren't a lot of ways for a machine to "crash" that lose all its data. Even a lightning-fried hard drive can have its platters removed by a data recovery lab, and many files can be pulled off. A mechanical failure doesn't grind the platters into sand. As a network server, it really should have had RAID too. So how exactly can "the server crash" so spectacularly that the RAID, the backups, and widely available data recovery services all fail? Did the building blow up?

  • Re:Wrong problem (Score:5, Interesting)

    by RetroGeek ( 206522 ) on Wednesday May 02, 2007 @09:30AM (#18955725) Homepage

    The troublesome process is the restore.

    I heard a story about a LAN admin who was doing backups every night. The tapes would go into a safe, then would go offsite, then be used again.

    Everything worked well(?) until they needed to do a restore. The tape in the safe was corrupt. The tape at the offsite storage was corrupt. No tape was good.

    It seems that the LAN admin made tea every morning. The electric kettle sat on top of the steel safe.

    So the backup tape would be placed into the safe, then the kettle would be started, magnetizing the safe and erasing the tape.

    Not ONCE did anyone try to do a test restore to prove the system.
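
    A minimal sketch of the kind of test restore that would have caught this, assuming the backups are plain tar archives and using made-up paths (/backups/latest.tar.gz and /srv/data are not from the story); it restores into scratch space and compares checksums against the live tree, so files changed since the backup will of course show up as mismatches:

    import hashlib
    import tarfile
    import tempfile
    from pathlib import Path

    BACKUP = Path("/backups/latest.tar.gz")   # hypothetical archive location
    LIVE = Path("/srv/data")                  # hypothetical tree the archive was made from

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    with tempfile.TemporaryDirectory() as scratch:
        # If this step fails, the backup is already useless -- that alone is worth knowing.
        with tarfile.open(BACKUP) as tar:
            tar.extractall(scratch)

        # Assumes the archive was created relative to LIVE's parent (e.g. tar -C /srv data).
        restored_root = Path(scratch)
        mismatches = []
        for live_file in LIVE.rglob("*"):
            if not live_file.is_file():
                continue
            restored = restored_root / live_file.relative_to(LIVE.parent)
            if not restored.is_file() or sha256(restored) != sha256(live_file):
                mismatches.append(live_file)

        print(f"{len(mismatches)} files missing or different in the restored copy")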
  • Re:Wrong problem (Score:4, Interesting)

    by mseeger ( 40923 ) on Wednesday May 02, 2007 @09:43AM (#18955915)
    > Would mirrored drives be a more effective solution?

    Yes and No:

    • Mirrored drives are good protection against drive failures and (usually) offer an easy restore process. If you mirror a drive and put the copy away (e.g. in a safe), that is a real and widely used backup method. As always, you should try at least once to boot the system with the primary disk removed; RAID controllers sometimes have their quirks too (see the sketch after this comment).
    • This method usually depends on the availability of specific hardware: if you cannot get a new mainboard or RAID controller of the same type, the mirrored disk contains data you may have trouble getting at. You can ignore this issue if you keep identical hardware at a safe location as well.
    Regards, Martin
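
    A minimal sketch of a mirror health check along those lines, assuming Linux software RAID (it only parses /proc/mdstat and flags degraded arrays; it is no substitute for actually booting from the secondary disk once):

    from pathlib import Path

    # /proc/mdstat shows each md array followed by a status line such as
    # "976630464 blocks super 1.2 [2/2] [UU]"; an underscore ("[U_]") means a member is missing.
    lines = Path("/proc/mdstat").read_text().splitlines()

    degraded = []
    current = None
    for line in lines:
        if line.startswith("md"):
            current = line.split()[0]            # e.g. "md0"
        elif current and "blocks" in line and "[" in line:
            status = line[line.rindex("["):]     # e.g. "[UU]" or "[U_]"
            if "_" in status:
                degraded.append((current, status))
            current = None

    if degraded:
        for name, status in degraded:
            print(f"WARNING: {name} is degraded: {status}")
    else:
        print("all md arrays report every member present")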
  • by Anonymous Coward on Wednesday May 02, 2007 @12:47PM (#18958625)

    I work for an IT department that had a serious crash that resulted in lost data.

    Contracts, reference materials, source code, passwords, you name it, some of it was lost.

    For us it happened because a tech without a lot of experience set up a new server many years ago. As the years passed, he gained competence by leaps and bounds, and the server he had set up came to be relied on far more heavily than originally intended.

    One day a drive crashed (hard), and only then did we discover the array was RAID 0. Yes, the wails of anguish were painful to listen to. The tech was sorrowful but somewhat supported by the fact that management knew they were sending a rookie when he set it up, and by the fact that nobody ever checked to see whether the rookie had done the job right. Of course, he would never make that mistake now, but we all make noob mistakes when we're noobs.

    So we sent the RAID set out to a recovery service, and they reconstructed the data piece by piece, but many files were damaged beyond repair and some are probably lost forever. The recovery took a month, on a server our department had come to rely on for daily work.

    Backup time, right? Nope. It turns out the backup system had become so overburdened that the department managing it had started making judgement calls about what to purge and what to keep running. Everything critical was on tape, it was reasoned... but recall that this server was never originally intended to be a critical server, and thus had no tape backup. The disk-based backups had long since been purged, and the server (still considered non-critical by the people managing the backups, because they'd never been told differently) was left without any backup, save those made before its role changed, which were for all practical purposes useless.

    That is how it happens.

    I see a lot of comments saying that you must test your backups and make sure they succeed, but I don't see many relaying the first rule of backups: MAKE SURE YOU ARE BACKING UP WHAT IS ACTUALLY NEEDED.
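
    A minimal sketch of that rule turned into a check, with hypothetical inputs (critical_paths.txt listing what the business actually needs, backup_includes.txt listing what the backup job is configured to cover; neither filename comes from the comment above):

    from pathlib import Path

    # Both files: one absolute path per line. Neither name is real -- adapt to your environment.
    CRITICAL = [Path(p) for p in Path("critical_paths.txt").read_text().splitlines() if p.strip()]
    INCLUDED = [Path(p) for p in Path("backup_includes.txt").read_text().splitlines() if p.strip()]

    def covered(path: Path) -> bool:
        """True if some include entry is the path itself or one of its ancestors."""
        return any(inc == path or inc in path.parents for inc in INCLUDED)

    missing = [p for p in CRITICAL if not covered(p)]

    if missing:
        print("critical paths with NO backup coverage:")
        for p in missing:
            print(f"  {p}")
    else:
        print("every critical path is covered by at least one include rule")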
