Data Storage Software Linux

Backing Up is Hard to Do? 299

Joe Barr writes "NewsForge is running a story this morning on a personal hardware/software backup solution for your Linux desktop (NewsForge is owned by Slashdot's parent OSTG). The solution doesn't require a SCSI controller, or tape drive, or the ability to grok a scripting language or archiving tool to work, either. It's based on point-and-click free software. Plus it includes a dead-parrot joke by Linus Torvalds."
  • by Anonymous Coward on Thursday January 13, 2005 @02:02PM (#11351012)
    Backing up isn't painful, restoring is.
    • I like rdiff-backup [nongnu.org] because it makes reverse diffs, meaning the files on the backup disks are always the most recent ones, with older changes kept in diff files. It works over standard ssh and is fairly easy to set up (some scripting is required, or at least doesn't hurt). No GUI, though.
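      For anyone curious, the basic invocations are short; a rough sketch (hostnames and paths here are only placeholders):

      # push $HOME to a remote machine over ssh; the reverse diffs land in
      # rdiff-backup-data/ inside the target directory
      rdiff-backup /home/me user@backuphost::/backups/me

      # pull back a single file as it looked ten days ago
      rdiff-backup -r 10D user@backuphost::/backups/me/notes.txt /home/me/notes.txt

      # prune increments older than a month
      rdiff-backup --remove-older-than 1M user@backuphost::/backups/me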
    • Re:Backup painful? (Score:5, Informative)

      by khrtt ( 701691 ) on Thursday January 13, 2005 @02:35PM (#11351511)
      Here's my solution

      Backup:
      tar -czf backup.tar.gz /home /etc
      Then use k3b or something to record the file to CD

      Restore:
      Take a wild guess:-)

      Restore individual files:
      Use mc to browse the tarball (slow but works)

      Now, do you see me bragging about this trivial shit on slashdot? No?

      Eh, wait...
      • by Jim Hall ( 2985 ) on Thursday January 13, 2005 @05:07PM (#11353024) Homepage

        I have a 20GB iPod, but only about 12GB is used. My $HOME is about 2GB, including a bunch of digital photos, but also a bunch of documents, my email, and other stuff I'd rather not lose.

        My solution is simple:

        1. Plug in iPod

        2. Run ~/bin/backup.sh
          This is a very simple shell script that deletes the backup file already on the iPod, then does a 'tar czf - $HOME' and pipes it into gpg using symmetric encryption (that is, a passphrase). The encrypted, compressed tarball (about 1.7GB) is written directly to the iPod. Takes about 20 minutes.

        3. Eject the iPod

        4. Done!

        I've used this backup copy to do restores, and it's really as simple as plugging in the iPod, using gpg to decrypt the file, and piping that into 'tar xvzf -' to re-create my $HOME. I can move all my stuff back to where it needs to be after that.

        (For those who wonder: I always make an encrypted backup file in case my iPod is ever lost or stolen. Sure, the bad guy can probably run something to brute force the passphrase, if that's something he's interested in doing, but it's a tough passphrase. I don't worry about it so much, and it's "only" email and family photos.)
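        The script itself isn't shown, but the tar-into-gpg pipeline it describes would look something like this (the iPod mount point and file name are assumptions):

        # back up: stream a compressed tarball of $HOME into gpg, using
        # symmetric (passphrase-only) encryption, written straight to the iPod
        tar czf - "$HOME" | gpg --symmetric --cipher-algo AES256 \
            --output /mnt/ipod/home-backup.tar.gz.gpg

        # restore: decrypt and unpack into the current directory
        gpg --decrypt /mnt/ipod/home-backup.tar.gz.gpg | tar xzf -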

      • tar -lcvf /backup/$HOSTNAME-$DATE.tar /

        (I'm the type that creates one big ass partition for /). The -l switch tells tar to stay on the same FS, as /backup is NFS mounted to a RAID array. Thus, I just back up the local machine, without having to specify which directories to back up and which to skip.

        Restoration, I do the lazy way:
        mkdir test
        cd test
        tar -xvf /backup/whatever.tar

        and then I grab the files (the RAID array usually has plenty of space).
      • backup2l (script) (Score:3, Informative)

        by Horizon_99 ( 58767 )
        backup2l [sourceforge.net] does a great job at figuring out which files are new or have been modified for incremental backups. Easy to configure and very lightweight.
    • Backing up isn't painful, restoring is.

      Restoring is a pain when the backups are incomplete, or the backup media is faulty (quite common). Instead of having a backup of the complete system, people just back up user data, counting on reinstalling the OS and then restoring being a breeze. Ouch! Now they have to reinstall numerous vendor patches, as well as redo other undocumented tweaks to the system, before restoring. Yup, restoring here can be painful. The only funny thing about parent post is the moderators modd

      • Fully agree with the parent.

        When you back up to DVD or CD-ROM, make sure to run a verify after every backup!

        Verify means comparing the files you meant to write against what is actually read back from the disc (diff /home/blah/bigfile /mnt/cdrom/bigfile).

        You'd be surprised how often many CD/DVD-writers screw up some files or even the whole disc. If you skip the verify you'll learn it the hard way.
    • I'm still trying to find the right combination for backup/restore on OSX.

      First off, I don't do the default OSX install. I always slice up the partition (not partition the drive, this is a *bsd type OS people! :P) as described in this article [macosxhints.com] that I wrote for MacOSXHints a couple of years ago.

      Then on a semi-regular basis, I will (or rather, should) clone everything save /Users using
      Carbon Copy Cloner [bombich.com]. The same idea applies to other *nixes: make a bootable clone minus /home. Ghost for Unix, perhaps?

      T
  • Here is the dead parrot joke for those too lazy to read the article:

    That implies that you didn't unmount it before powering it down, which is bad (it's like removing a floppy without unmounting it). If you really want to use it that way, then try supermount or some of the other "on-the-fly" mount utilities. Or do you mean that the disk just powered down on its own, and is just sleeping? If so, everything is fine, and it should come right back up when it's needed. "It's not dead, it's just sleeping."

    • by Anonymous Coward
      Dead Parrot Sketch

      The cast:

      MR. PRALINE John Cleese
      SHOP OWNER Michael Palin

      The sketch:
      A customer enters a pet shop.

      Mr. Praline: 'Ello, I wish to register a complaint.

      (The owner does not respond.)

      Mr. Praline: 'Ello, Miss?

      Owner: What do you mean "miss"?

      Mr. Praline: I'm sorry, I have a cold. I wish to make a complaint!

      Owner: We're closin' for lunch.

      Mr. Praline: Never mind that, my lad. I wish to complain about this parrot what I purc
  • by Megaslow ( 694447 ) * on Thursday January 13, 2005 @02:07PM (#11351104) Homepage
    I use rsnapshot [rsnapshot.org] to automate my backups to another host. Works like a dream, providing multiple virtual point-in-time copies (just like the similar functionality from Network Appliance, etc.).
    • I'll second the use of rsnapshot. I use it for remote backups for several servers and it works well.

      For those who do not know, rsnapshot uses rsync to back up. What makes it unique is its ability to use hard links to keep full copies of each backup (i.e., during a restore you just go into the folder you want and copy the data back; no need to shift tapes or do a full plus incrementals, etc.).

      rsnapshot is run via cron so you can configure it to email when it runs (to verify correct operation).

      I have had to restor
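      For the curious, the heart of an rsnapshot setup is a handful of lines in rsnapshot.conf plus a cron entry or two; roughly like the sketch below (fields in the config must be tab-separated, and all paths are just examples):

      # /etc/rsnapshot.conf (excerpt)
      snapshot_root   /backups/snapshots/
      interval        daily   7
      interval        weekly  4
      backup          /home/              localhost/
      backup          user@server:/etc/   server/

      # crontab entries that actually run it
      30 3 * * *      /usr/bin/rsnapshot daily
      0  4 * * 1      /usr/bin/rsnapshot weekly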
  • Several examples of an easy backup. In KDE, drag and drop and select "Copy". Duh...
    For the more typing inclined people, create a directory and do this:

    rsync -av --delete --no-whole-file /folder-to-backup/ /backupfolder

    • That works, but it doesn't work for any server functions that a /.'er might have running on their home system. (I realize that is beyond the scope of the article -- which I didn't read.) MySQL and any web/CGI data won't back up cleanly if rsync runs while people might be using that data. Similarly, setting up a cron job to rsync might cause problems if you happen to be editing that data while it is being copied.
  • by ceswiedler ( 165311 ) * <chris@swiedler.org> on Thursday January 13, 2005 @02:08PM (#11351121)
    The best way to create differential backups under Unix is with hardlinked snapshots. Easy Automated Snapshot-Style Backups with Rsync [mikerubel.org] has a good explanation of how to do this. The best part is that restoring is as simple as copying a file. Each snapshot is a folder hierarchy on disk, and you can browse through any snapshot and find files you want.

    One small improvement over rsync (IMO) is to use mkzftree from the zisofs-tools [freshmeat.net] package. It's designed to create compressed ISO filesystems which will be transparently uncompressed when mounted under Linux (and other supporting operating systems; it's a documented ISO extension). mkzftree supports an option for creating a hardlinked forest (like cp -al and rsync), with the advantage that the files are compressed, thus saving space. ISO isn't quite as flexible as ext2 for things like hardlinks, so what I do is have DVD-sized disk images formatted as ext2 to store the snapshots. I burn the disk images directly to DVD; each one can hold ten or twenty compressed snapshots (of my data, anyway). The disadvantage is that I can't read the files directly (because they're compressed, and the transparent decompression only works with ISO), but it's easy to decompress a file or folder to /tmp using mkzftree if I need to restore something.

    It shouldn't be hard to make the transparent decompression code work with other filesystems than ISO, as long as they're mounted read-only. The files are just gzipped with a header block indicating they are compressed.
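    The hardlink trick from that article boils down to a few lines; a stripped-down version (the destination paths are made up) looks like:

    # rotate: the oldest snapshot falls off, the newest is hardlink-copied
    rm -rf /backups/snap.3
    mv /backups/snap.2 /backups/snap.3
    mv /backups/snap.1 /backups/snap.2
    cp -al /backups/snap.0 /backups/snap.1

    # refresh snap.0; unchanged files stay as hardlinks, while changed files
    # are written fresh, so the older snapshots keep the old versions
    rsync -a --delete /home/ /backups/snap.0/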
  • Easy (Score:5, Interesting)

    by harlows_monkeys ( 106428 ) on Thursday January 13, 2005 @02:09PM (#11351129) Homepage
    Here's what I do:

    1. Reach over and plug in USB 120 gig drive.

    2. Become root, and go to /root.

    3. Type "./backup.sh".

    That is a script that goes to all the directories I care about (/root, /etc, /srv/www, /usr/local/share, and my home directory), and basically does this for each directory:

    cd $DIR && rsync -avz --progress --delete . $MNT/$DIR

    where $MNT is where the USB drive mounts.

    4. Unmount the drive and unplug it.

    This is quick (a few minutes) and easy, and since rsync reads the files from the last backup to figure out what needs to be copied, it should catch it if I develop a bad sector on the USB drive.

    I left it out in the above, but before doing the rsyncs the backup script also dumps my crontab to a file, so that gets backed up too.
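    The script itself isn't reproduced here, but something along these lines would do the same job (the mount point and directory list are assumptions, not the poster's actual script):

    #!/bin/sh
    # back up a handful of directories to a USB disk mounted at $MNT
    MNT=/mnt/usbdisk

    # save the crontab first, so it rides along with /root
    crontab -l > /root/crontab.backup

    for DIR in /root /etc /srv/www /usr/local/share /home/me ; do
        mkdir -p "$MNT$DIR"
        cd "$DIR" && rsync -avz --progress --delete . "$MNT$DIR"
    done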

    • That's not easy- that's foolish. What if you discover a file is corrupt/missing AFTER you do one of your backups?

      There's a reason incremental backups have been around for two plus decades, and "update the difference between two drives with rsync" is not "incremental".

      If you were going to reply and say "oh, but I only do it every X weeks", well, you'll now lose weeks of work if you lose a file/drive.

      • Re:no incremental (Score:3, Informative)

        by bloosqr ( 33593 )
        Actually, we use rsync for incremental backups [mikerubel.org] and it works quite well. It's a simple modification of the rsync commands and can all be scripted away pretty easily.

        b-loo
      • Incremental as above, plus creating images of the incremental stuff with the current date-time stamp in the file name, then recording the images out to DVD when you get some stored up.

        I do it over nfs and smbfs mounts using rsync to dvd-r's and dvd-rw's.
      • ### That's not easy- that's foolish.

        Yep, but luckily rsync provides the options --backup and --backup-dir, which make it easy to create increments. Strictly speaking they aren't increments, but rather the files changed or deleted between the previous and the latest rsync run; in the end they serve pretty much the same purpose.
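        In one line, that approach looks roughly like this (the destination is assumed to be a mounted backup disk):

        # current mirror lives in /mnt/backup/current/; anything changed or
        # deleted since the previous run gets tucked under a dated directory
        rsync -a --delete --backup --backup-dir=/mnt/backup/increments/$(date +%F) \
            /home/ /mnt/backup/current/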
    • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday January 13, 2005 @03:15PM (#11352023) Homepage Journal
      That's not a backup - that's a userland implementation of RAID 1 with very high latency.

      I make daily differential backups (via AMANDA) to a rotating set of 12 tapes. If I accidentally delete /etc/shadow or some other important file, I have nearly two weeks to discover the problem and restore a previous version from tape. Your idea gives you, oh, until about the time that rsync discovers the missing file and dutifully nukes it from your "backup" drive.

      What you're doing is certainly better than nothing, but it's not a backup solution by any definition of the term beyond "keeps zero or one copy of the file somewhere else".

      Far, far better would be for your script to use dump or tar to create incremental backup files on your USB drive and to rotate them out on a regular basis.
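      With GNU tar that can be as little as the following (the snapshot file and paths are placeholders):

      # listed-incremental backups: the .snar file records what was already
      # dumped, so each run only archives files changed since the last one
      tar --listed-incremental=/var/backups/home.snar \
          -czf /mnt/usb/home-$(date +%F).tar.gz /home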

    • The only problem with this solution is that there is no encryption. If you plan on storing your backups offsite, then you should probably consider having it encrypted.

      I've yet to find an adequate FOSS backup solution that meets the following requirements:

      1. Able to backup only specified directories (I really don't need a backup of /usr/bin).

      2. Backups are strongly encrypted.

      3. Backups are fault-tolerant. (If I lose one byte in the middle of a CD, I don't want to lose the whole thing.)

      Right now my
    • yuck.

      I simply do the following in a cron job that runs nightly.

      tar -czvf /dev/st0 /; mt -f /dev/st0 eject

      This dumps the contents of all drives to the SDLT tape drive (one tape holds 240GB of files) and then ejects the tape. Swapping the tapes in each server every morning is the hard part. Every tape is a complete backup; that way I have 5 copies of all files on hand, each one day older than the next, and 16 copies off site: 4 weeklies and 12 monthlies.

      the larger database server has a jukebox
    • Rsyncing to a mounted drive is very useful, and a good way to do things on machines that are nearby. What about remote co-lo machines, though? I've been puzzling over how to do incremental backups (i.e. NOT dumping the whole system once a week/month) of a co-located system across the net. I need to preserve permissions and ownerships, and as I said, it must be incremental (using rsync this should be easy, at least). The hard part, though, is doing it securely. NFS mounts ... are not ver
  • by supergwiz ( 641155 ) on Thursday January 13, 2005 @02:09PM (#11351135)
    Someone taught me a cool trick to backup up all files with the highest possible compression ratio and speed: mv * /dev/null
    • That's really cool, but it's a write-only backup...
      • What? (Score:3, Funny)

        by MattHaffner ( 101554 )

        That's really cool, but it's a write-only backup...

        You can read from /dev/null just fine:

        # for x in *; do cp /dev/null $x; done

        will restore a whole directory's worth of files back to what's stored in the backup. If you want to make an exact copy of the whole filesystem stored in the backup:

        # rm -rf /; cp /dev/null .

        Now, that's just off the top of my head, so I won't take any blame (or credit) if you try that out on your own system.

    • Ironically, that probably isn't the highest possible speed.

      We used to take "bogus" backups from our DB and run them to /dev/null, however on our OS, /dev/null was not multi-threaded. We actually found that backing up to our disk array was faster, and then we could just rm it afterwards.

      Of course, now that I think about it, since mv is just a pointer command, it might be... cp * /dev/null would be slow, though.

      -WS
    • And hey, you can probably even restore it... But that's what you want, right? Fast backup and slow restore process... with hex tools on /dev/hda
  • There's Dar [linux.free.fr] (as mentioned in the article) and also bacula [bacula.org] for remote backups - go check them out if they're new to you.
  • In case of /. ing (Score:3, Interesting)

    by Pig Hogger ( 10379 ) <.moc.liamg. .ta. .reggoh.gip.> on Thursday January 13, 2005 @02:11PM (#11351158) Journal
    A hard drive crash over the holidays left me scrambling to get back to a productive desktop as quickly as possible. Luckily, I had my /home partition on a separate drive, so I didn't lose precious email, stories, research, and pictures. But it did get me thinking about my lack of preparedness. Where was the back-up system I've talked about for years, but never acquired? This is the tale of how I rectified that glaring omission, and built myself a personal back-up system using inexpensive parts and free software.

    The hardware

    My desktop machine includes three IDE drives and an ATAPI CD-ROM drive. I have Debian installed on hda, SUSE on hdc, and my /home directory on hdd. Backing up directly to CD would be too slow and too cumbersome for me, so the first thing I needed was some new hardware.

    In the past I've researched tape drives and found that for a decent drive, I would also have to add a SCSI controller. Those two items can be pretty pricey. I opted for a less expensive configuration.

    I decided to go with a removable IDE drive, connected via USB. I bought a 3.5-inch hard disk enclosure with USB 2.0 connectivity on eBay. It cost roughly $45, including shipping. With three drives to backup, I needed a large-capacity IDE drive to hold all the data. It turns out I already had one, just waiting for me to use. I raided the stash of goodies I've been hoarding to build a killer MythTV box and found a 250GB Hitachi DeskStar -- just what the doctor ordered. I got it on sale at Fry's Electronics a couple of months ago for $189.

    I have the mechanical skills of a three-toed sloth, but I still managed to cobble together the drive and the enclosure, neither of which came with directions. Four screws hold the faceplate on the enclosure, and four more hold the drive in place inside. Even I was able to puzzle it out.

    The most difficult part was the stiffness of the IDE cable running between the faceplate and the drive. In hindsight, I recommend connecting the power and data cables from the faceplate to the drive before screwing the drive in place inside the enclosure. I also recommend not forgetting to slide the top of the enclosure back in place before reattaching the faceplate.

    I connected the USB cable to the enclosure and the PC and powered on. Using the SUSE partitioning tool, I created an ext3 filesystem and formatted it on the Hitachi drive, using the default maximum start and stop cylinders. That worked, but there was a problem. My great big 250GB drive yielded only 32GB.

    One of my OSTG cohorts asked if I had clipped the drive for 32GB max, but I had done no such thing. All I did was check to see how the drive was strapped out of the box. It was set to Cable Select, which was fine with me, so I left it like that. His question worried me, though, because I had never heard of a 32GB clip thingie before.

    I called Hitachi support to find out what was up with that. Their tech support answered quickly. When I explained what was going on, he agreed that it sounded like it was clipped to limit its capacity. This functionality allows these big honkers to be used on old systems which simply cannot see that much space. Without it, the drive would be completely unusable on those machines.

    I asked why in the world they would ship 250GB drives configured for a max of 32GB by default, and he denied that they had. He asked where I got the drive, then suggested that Fry's had "clipped" it for some reason. There are jumper settings to limit the capacity, but my drive had not been jumpered that way. Perhaps Fry's sold me a returned drive that a customer had "clipped", then returned the jumpers to their original position. We'll never know.

    The tech told me how it should be jumpered for Cable Select without reducing capacity. I opened the USB enclosure, pulled out the drive, and found it was already jumpered as he described. Undaunted, I pressed on.

    On the Hitachi support page for the drive, I found a downloadable tool wh
  • 4.7GB each side. Being in cartridges, they supposedly have a much longer lifetime than DVD-RW. Plenty for weekly full backups. I don't backup the system directories like /usr/bin because they are from an install. I backup /etc/ /home /root /usr/local /var/{various}. No need for anything fancy.
  • Heh, noob mistake (Score:5, Interesting)

    by stratjakt ( 596332 ) on Thursday January 13, 2005 @02:14PM (#11351214) Journal
    He plugs in a USB drive, runs KDar to fill it with stuff.

    Now, when his system borks, how does he restore? Or did he think that far ahead?

    I skimmed the article, and nothing about restoring. Your backup is useless if you can't restore it.

    Does he have to install and configure linux, X, and KDE just to be able to access KDar?

    Forget all this jibberjabber, and emerge or apt-get or type whatever command you use to get Mondo/Mindi. Just perfect for home boxes, and most other use.

    Burn yourself a bootable CD that can recreate your box, just like Norton Ghost for Linux. I have it write out the iso files and boot disk for /bin /usr, etc, which I then burn onto a couple of DVD9-Rs. I can run this to recreate my system.

    I run a separate job to back up /home.

    What's important is to separate system data from user data when it comes to backups. This also forms my "archiving" system, since old "/home" backups stick around, so if I want to take a look at the version of foo.c I was writing 6 months ago, it's easy enough to find.

    As much as I love Mondo/Mindi, it's not the be-all and end-all. AMANDA is a better choice for a corporate (more elaborate) environment. It's a PITA and not worth getting involved with for a simple user box.

  • by diamondsw ( 685967 ) on Thursday January 13, 2005 @02:18PM (#11351262)
    Just do an NFS or SMB mount:

    mount -t smbfs -o username=myuser,password=mypass //10.0.1.111/backup /mnt/backup
    cd /mnt/drive
    tar -cvjf /mnt/backup/snapshot.tar.bz2 .

    (If I recall the commands correctly.) I use this all the time to make quick snapshots of my Gentoo installation before emerging some bleeding edge package.
  • "A known bug in the current version (1.3.1) prevents the restoration of individual files or directories from an archive at present, but that may be fixed in the next release."

    I don't know about everyone else, but isn't that one of those things that should have come up pretty early in beta testing?
    "Great backup program... too bad it can't restore."

  • 1. External USB drive. (Actually two, so that one can be on the shelf and one plugged in). The drive is bigger than my existing system disk, and partitioned as one big bootable partition.

    2. A cron job that runs 4 times a day and does

    for DEST in /media/usb* ; do
    if [ -d $DEST/home ] ; then
    rsync -aSuvrx --delete / /boot /home /usr /var /mp3s $DEST
    fi
    done

    If anything went wrong with the main disks, it would be pretty simple to get grub installed on the USB drive, and whip
  • Personally I use the same sort of setup but use rdiff-backup to do the actual backup/restoration. It's really nice because I now have nightly differential backups of my system without consuming a large amount of disk space.

    Also, rdiff-backup allows for remote operations. So you can have a central server back up many desktops, with relative ease. It doesn't have a nice GUI, but then again, I'm running it all through a cron job anyhow, so who cares.

    Restoration is a breeze because the most recent snapshot
  • by vlad_petric ( 94134 ) on Thursday January 13, 2005 @02:25PM (#11351360) Homepage
    It's the Swiss Army knife [sf.net] of backing up. It can back up stuff over samba, ssh/rsync, ssh/rsyncd, ssh/tar, or direct file access (in other words, it doesn't need special software installed on the clients). It keeps a single copy of multiple identical files, so backing up a bunch of Windoze machines can be done with a decent amount of space.

    Restore is also straightforward - it can be done in place, or by downloading a zip/tar file.

  • I just dropped my big drive into an old box (P90 from the garbage) and use it via sftp and konqueror's built in handler. The only trick was to use another disk to boot it, and not fool with BIOS other than that so that the kernel sees the big drive and bios ignores it. Everything to be backed up gets written to CDROM then moved to the big archive. Current stuff gets a copy on my local machine and a copy on the archive drive. It's not perfect, but it seems to work well enough. Everything active has two
  • I just do full weekly images of my system. I've got two 160Gig Seagates. On one of them there are Win2K and Gentoo installs. About once a week I just boot up a Linux floppy and dd either the whole 160G image or whatever partitions changed.

    What I like about this is that I always have a week-old fallback if I mess something up. And if the original drive fails, I just swap the backup in, in less than a minute.

    And yes I also do select daily data backups (email, etc.)
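    A full-disk image like that is a one-liner from the rescue floppy, assuming the second disk is at least as big and neither disk is mounted read-write (device names here are only examples):

    # clone the whole first disk onto the second, block for block
    dd if=/dev/hda of=/dev/hdb bs=1M conv=noerror,sync

    # or refresh just one partition that changed
    dd if=/dev/hda3 of=/dev/hdb3 bs=1M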

  • KDar? (Score:5, Funny)

    by MonkeyCookie ( 657433 ) on Thursday January 13, 2005 @02:26PM (#11351383)
    Is that some kind of sense that allows you to pick out other KDE users in a crowd?
  • Backups onto USB removable drives?

    Not if you _care_ about your data! The drivers for this stuff seem to be very betaish. Lockups, garbled writes, non-standard implementation.

    It's just not worth the hassle...
  • Is there a snapshot-capable filesystem on linux that can backup off of the snapshot, so you don't have to stop your production apps?

    I mean, after all, who cares if he backs up to DVD or CD or network, or whatever. We all know linux is good at moving data. I usually backup with tar -cz to a tarball on CD, and I can restore from this from a minimum CD boot, and I don't get the idea to brag about it on slashdot either.

    Choosing this or that media is a non-problem, as long as you understand the difference be
    • The example of this I've heard is LVM and XFS. I believe XFS lets you freeze changes to a filesystem, and LVM can take a copy-on-write snapshot of the underlying volume. Then you back up from the snapshot while production carries on against the live filesystem.
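      Roughly, the sequence would be something like this (the volume group name, snapshot size, and paths are all made up):

      # freeze the filesystem so the snapshot is consistent
      xfs_freeze -f /home
      lvcreate --snapshot --size 2G --name homesnap /dev/vg0/home
      xfs_freeze -u /home

      # mount the snapshot read-only (nouuid avoids the duplicate-UUID complaint)
      mount -o ro,nouuid /dev/vg0/homesnap /mnt/snap
      tar czf /backups/home-$(date +%F).tar.gz -C /mnt/snap .
      umount /mnt/snap
      lvremove -f /dev/vg0/homesnap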
  • The biggest problem with backing up a live system is maintaining consistency during the backup: a backup can take hours, and if the system is changing state during that time, the last half of the backup can end up inconsistent with the first half.

    For example, you might have something that changes both /usr/a/foo and /usr/z/bar at the same time, but if /usr/a/foo gets backed up, then thirty minutes later /usr/z/bar gets backed up, that is a thirty-minute window in which a change can happen and re
  • Lately I've been using Unison to back up to my ipod: http://www.cis.upenn.edu/~bcpierce/unison/ [upenn.edu] It's cross-platform and works on mounted file systems as well as ssh. Also if I add documents on my ipod, changes are made to the source (if I say so). All in all, it's a great little (free) tool.
  • I'm sure there are better ways, but here's a paraphrase of my backup.sh. Whole thing took about 25 minutes with testing.

    today=$(date +%F)

    zip -r -u /hd2/backups.zip /collected
    cp -v -f -u /hd2/backups.zip /hd2/iso
    rm -r -f /hd2/iso/date*
    mkisofs -r -J -o /hd2/image_$today /hd2/iso
    cdrecord -v -speed=16 dev=0,0,0 -data /hd2/image_$today
    eject cdrom1

    Where "collected" are files copied to the local machine through a series of smb mounts and copied across the network as well as important files on the local system t
  • Pathetic (Score:2, Informative)

    by agw ( 6387 )
    Copying data to a single IDE drive and calling it "backup" is just pathetic.

    He should read the Tao of Backup http://www.taobackup.com/ [taobackup.com] and be enlightened.
  • I have an old machine with a removable IDE drive bay in it and a fair number of IDE disks in removable caddies. I simply back up to that disk (scripts that tar/gzip the data) and when the week is over, shut the machine down and swap in a new disk. I could probably even hot swap it, but I would rather make really sure everything got written to disk. One nice advantage of this is I can read my backups on any machine with an IDE controller, plus when I need more space I just upgrade to bigger disks and use th
  • I manage the tech/training for student publications at a university. Our server is an Apple Xserve supplied by OIT. It works great but we do not let them back it up because it is prohibitively expensive.

    Our solution was to buy a couple of 160GB FireWire LaCie hard drives [lacie.com]. They have heavy-duty aluminum cases and USB2, FireWire, and FireWire 800 interfaces. I use CMS Products' free BounceBack Backup Express [cmsproducts.com] software to automatically synchronize the files on the server to the files on the hard disks.

    It works g
  • by wernst ( 536414 ) on Thursday January 13, 2005 @02:43PM (#11351607) Homepage
    Why even worry about mounting and unmounting volumes from Linux? I just use Norton Ghost, which has been happily backing up my Linux partitions (or whole drives, pretty much regardless of partition types) for a few years now.

    With just one or two boot floppies, I can back up and restore my Linux drives to any of: other internal IDE drives, other partitions on the same drive, external USB1 and USB2 drives, burnable CDs, or burnable DVDs.

    Heck, it is so fast and reliable, I've been known to backup the drive just before even *trying out* new software or options, and if I don't like it, I just Ghost it back to how it was.

    Now, I know it isn't free, or even Linux based, but it is hard to argue with cheap, reliable, and fast backup procedures that just work all the time...

  • And that is easy (Score:5, Interesting)

    by flibuste ( 523578 ) on Thursday January 13, 2005 @02:43PM (#11351611)

    I've read the whole article. My! You'd better be a geek to cope with all the little worries...

    Getting cheap AND working hardware on eBay: my mom will not do that for the sake of her computer.

    32GB limitation by jumpers. Not obvious for an end-user.

    Booting up *nixes from various drives in order to access the limited drive, then fiddling with partitions: I still don't dare touch my configs for more than one OS at a time, let alone various OSes on various drives.

    Compiling KDar?! Compiling what? What do I have to do? "Comp..??" You have to admit, it's not for the dummy kind.

    Definitely not "Backup made easy" but "Made not so expensive" since the price tag still reaches 300$ (drive + box from e-bay + screws + shots of valium to calm you down when your machine refuses to boot after all the offence you just did to it).

    I bought Linux Hacks [amazon.ca]. This, Webmin [webmin.com] and a remote machine accessible using Samba or sftp does the daily backup just fine.

  • There are a million easy-to-use backup programs for Windows, yet I hardly know anyone that backs up their home machines. I'm all for easy-to-use GUI tools, and if this is what it takes for someone to do backups, then so much the better. But people generally don't do backups not because it's hard to do, but because it's inconvenient or requires specific user intervention. People will spend hours downloading and burning movies, but they won't spend 5 minutes putting their email, bookmarks, and data directorie
  • From the kdar home page, it looks like it keeps a catalogue of the files in the archive, but does not keep the catalogue with the archive.

    If you lose the disk that was backed up, and the catalogue with it, is the kdar archive file useless?

    I use rsync to keep track of daily changes, and tapes to make backups. Tapes have the advantage of not showing up as a drive that can be destroyed if the system gets hacked.

  • With rdiff-backup, you can back up dozens of gigabytes effortlessly and restore just as effortlessly to any point in time. Add it to a nightly cron job and you are golden!

    From the description: "rdiff-backup backs up one directory to another, possibly over a network. The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and

  • A network hard drive is a good solution for a small office. Low power, fast transfer, large capacities.
  • by ywwg ( 20925 ) on Thursday January 13, 2005 @02:54PM (#11351741) Homepage
    if [ `df |grep /media/BACKUP |wc -l` == "0" ]
    then
    echo Backup drive not mounted, skipping procedure
    exit 2
    fi
    cd /media/BACKUP
    nice -n 10 rsync -va --exclude-from=/root/exclude $1 $2 $3 $4 $5 / .
    where /root/exclude contains:
    /mnt
    /proc
    /tmp
    /udev
    /sys
    /media
    Not the prettiest implementation, but it works.
  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday January 13, 2005 @03:07PM (#11351905) Homepage
    If you have a separate computer or a separate drive, you can use rsync to create backups relatively easily; it's just a few lines of shell:

    rsync -e ssh \
    --delete \
    --relative \
    --archive \
    --verbose \
    --compress \
    --recursive \
    --exclude-from list_of_files_you_dont_wanna_backup \
    --backup \
    --backup-dir=/backup/`date -I` \
    /your_directory_to_backup \
    user@other_host:/backup/current/

    This command mirrors everything in /your_directory_to_backup to user@other_host:/backup/current/ and in addition keeps all the changes you made to that directory in a separate 'dated' directory like /backup/2005-01-15, so you can also recover files that you deleted some days ago. Some other posters seem to have missed the '--backup' option, which is why I repost the rsync trick.

    The disadvantage is that you can't easily restore an exact old state of the backed-up directory; however, you can retrieve all the files very easily.

    There are also some shell scripts floating around which add a bit of voodoo to the above rsync line to hardlink the different dated directories, so that you have a normal browsable copy of each and every day while only spending the space needed for the changes.

    And there are also tools which optimize this whole thing a bit more, by compressing the changes you did to files, like http://www.nongnu.org/rdiff-backup/

    Overall, however, I found the plain rsync solution the most practical, since it doesn't require special tools to access the repo and 'just works' the way I need it.
  • Backup? (Score:2, Informative)

    by SilverspurG ( 844751 )
    The best solution for your backup problems is to learn to prioritize. No, you don't need to save your pr0n collection. No, you don't need to save every .jpg anyone's ever sent to you. No, you don't need to save every bad joke e-mail you've ever received. No, you don't need to save... you don't need to save... don't save... don't need.

    When I was young (early 20s) I saved everything. Then I had an HD crash. I started over and, several years later, my new HD inherited an unrecoverable problem. I starte
  • I still think that removable media (e.g. tape) is the most effective form of backup [baheyeldin.com].

    Under Linux, a tape drive can be used effectively to back up a home network [baheyeldin.com], especially when you have offsite storage (e.g. take the monthly backup to a friend or to your work).

    Granted, this is only for 10 or 20 GB worth of data, but I am not even half there yet. This does not apply to guys who have a, let's say extensive, collection of movies, or have a huge set of, ahem, images.

  • by TrevorB ( 57780 ) on Thursday January 13, 2005 @03:38PM (#11352357) Homepage
    One flaw in any hard drive backup system: what happens if your system is cracked?

    If someone gets into your system, they do an rm -r *, is your backup drive mounted?

    What if they're clever and do a mount all, or find your backup.sh first?

    I've seen some people take the first and last steps of "inserting the USB cable" and "removing the USB cable". Is there any kind of automated system that would ease this, or is it the hard drive equivalent of "remove tape, insert new tape"?

    USB drives also suffer from problems with catastrophic failure, like a fire in your home.

    I wonder if there exist any online backup systems that let you do offsite daily differential backups of your system (or critical files) that would let you download or mail you an image of your harddrive (on DVD-R) along with restore software in case anything went wrong. You could charge directly by bandwidth used. Hmm, interesting idea.
    • > One flaw in any hard drive backup system: what happens if your system is cracked?

      I've thought of that too. I like to back up my gradebook to another server.

      So, you're asking yourself: what keeps a malicious intruder from logging into the 2nd server after perusing my backup script?

      I used a little-used feature of ssh that allows you to restrict a session to a single pre-specified command. My backup script has only the ability to write new gradebook backups to the server. It cannot execute any ot
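      The feature in question is the command= option in ~/.ssh/authorized_keys; a sketch of the server-side entry (the key material and script name are placeholders):

      # ~/.ssh/authorized_keys on the backup server: no matter what the client
      # asks to run, this key may only ever execute the named script
      command="/usr/local/bin/store-gradebook",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA...keydata... backup@desktop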
  • mondo [microwerks.net] and backuppc [sourceforge.net] (if you're backing up other machines over the network).

    mondo will do a full image of your drives (including making images of ntfs/fat32 drives). You boot off the image you create with mondo, and you can nuke the machine and do a full restore from cd/dvd, or do a partial restore.

    backuppc is perl based and works wonders on a network for daily backups. (you can backup the server backuppc is running on too!)

  • We've slashdotted NewsForge! Judging from the resubmission retries for this post, we've slashdotted Slashdot! That parrot ain't the only inert ex-regurgitator pushin' up the daisies!
  • I have been using KNOPPIX CDs for creating backups for a couple of years now.
    (Linux/win)

    I generally just use partimage, but mc browsed tar archives just fine.

    IN the "early" days there was some brain damage with versions of partimage, but those are long gone.
  • I'm surprised HD vendors don't sell them in pairs, with auto mirroring of a nonspinning backup ready for swapping when they inevitably fail around MTBF (YMMV). Either a second drive, or even integrated as a pair of independent platter sets. They could double their prices per capacity, plus a markup for "extra backup protection", and avoid all the brand-switching anger surrounding the disk failure that's really the admin's fault, if there's no other backup. While justifying their humongous total capacity hig
  • mkcdrec (Score:3, Informative)

    by jonniesmokes ( 323978 ) on Thursday January 13, 2005 @05:44PM (#11353414)
    I recommend using mkcdrec for a bootable DVD or CD to recover the system, and a more frequent backup of the userland data using whatever you like.

    mkcdrec is a really neat program that packs up your whole system and makes a recovery disk. It's something any sysop should take a look at.

    See the homepage [mkcdrec.ota.be] here.

  • Use RAID-1 (Score:3, Informative)

    by Phil Karn ( 14620 ) <`ten.q9ak' `ta' `nrak'> on Thursday January 13, 2005 @07:57PM (#11354821) Homepage
    I use a much simpler and easier method to back up my primary Linux server: software RAID-1. Every month or so, I shut down, pull the secondary drive in the array, put it in the safe, and replace it with either an old drive or a new drive bought at the store. Then I reboot and let the mirror rebuild onto the new drive.

    Because RAID-1 is an exact mirror, I get a complete, bootable backup copy of my system at the time of the shutdown. Downtime is limited to the few minutes it takes to shut down and swap drives. The lengthy process of mirror rebuilding takes place while the system runs normally. And of course, RAID also protects me against random (single) hard drive failures.

    This solves the full image backup problem, leaving only the more frequent partial backups you should also be doing. For this, rsync is your friend. The stuff that changes most often on my system are my IMAP folders, which I periodically (several times per day) rsync to my laptop. Besides backing up my mail server, this gives me copies I can carry around and read when I'm offline.

    Tape is obsolete. It's just too slow, expensive, unreliable and small. Hard drives are so cheap, fast and capacious that there's little excuse to not run RAID on any machine that can physically hold more than one hard drive. Unfortunately, this leaves out most laptops.
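    For reference, with Linux software RAID a drive swap like that comes down to a few mdadm commands (device names here are assumed; the poster's exact procedure may differ):

    # retire the second half of the mirror before pulling the drive
    mdadm /dev/md0 --fail /dev/hdc1
    mdadm /dev/md0 --remove /dev/hdc1

    # after installing and partitioning the replacement drive:
    mdadm /dev/md0 --add /dev/hdc1

    # the rebuild runs in the background; watch it here
    cat /proc/mdstat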

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...