Ask Slashdot: Asynchronous RAID-1 Free Software Backup For Laptops? 227
First time accepted submitter ormembar writes "I have a laptop with a 1 TB hard disk. I use rsync to perform my backups (hopefully quite regularly) onto an external 1 TB hard disk. But with such a large disk, backups take quite some time because rsync scans the whole disk for updates (15 minutes on average). Is there some kind of asynchronous RAID-1 free software that would record all the changes I make to the disk in a journal and replay that journal later, when I plug the external hard disk into the laptop? I guess it would be faster than the usual backup solutions (rsync, unison, you name it) that scan the whole partition every time. Do you feel the same annoyance when backing up laptops?"
mdadm can do this (Score:5, Informative)
Re: (Score:3)
I am going to test this on my next laptop, or if I decide to upgrade my current one with an SSD some day.
Meanwhile, I do have a couple of questions. How automated is this going to be? Will it automatically start to sync once the USB/eSATA disk is connected?
Can I safely attach that disk to another computer for reading? I am worried such an operation might corrupt data, even if I don't write a
Re:mdadm can do this (Score:5, Informative)
Effectively you create a RAID 1 mirror. When you remove the external drive, the RAID degrades. The RAID write-intent bitmap keeps track of changes. When you plug the external drive back in, you just have to tell it to bring the array up to date, which syncs only the changes.
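A rough sketch of that setup; the device names, array name, and internal-bitmap choice below are placeholders, not anything from the parent post:

# internal partition becomes half of a mirror, with a write-intent bitmap
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sdb1 missing

# first time the external disk (/dev/sdc1 here) is attached: add it, full sync
mdadm /dev/md0 --add /dev/sdc1

# before unplugging: fail and remove it so the array degrades cleanly
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

# on later reconnects: re-add it; only blocks flagged in the bitmap get resynced
mdadm /dev/md0 --re-add /dev/sdc1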
Re: (Score:2)
Re: (Score:3)
Seriously? If the drive in the laptop fails, it has failed in any scenario. It doesn't matter what strategy you use to back up. You are looking at installing a new one and copying the backup in any event. In any backup scenario you have to do an added trick with grub to copy the boot sector to the second drive. Then all you have to do to recover is pop a new drive in the laptop and dd the backup drive to the new drive, boot sector, partition table, file system, files and all.
Re:mdadm can do this (Score:5, Interesting)
Re:mdadm can do this (Score:4, Informative)
Obligatory (Score:5, Informative)
RAID is not backup.
Re:Obligatory (Score:5, Informative)
True. I'd recommend he check out rdiff-backup, which keeps snapshots of previous syncs. Fantastic tool.
Re: (Score:3)
RAID is not backup.
It is in this situation since he wants to mirror to an external disk, then break the mirror and unplug the disk.
It's no worse than if he does "rsync --delete" to the backup medium. (well ok, slightly worse since if the mirror fails in the middle, the backup disk is left in an inconsistent state and could be unreadable, but the rsync would also leave an unknown number of files/folders unsynced, so it's not a perfect backup itself)
As long as you have more than one backup disk, then a mirror is as safe as rs
Re: (Score:3, Informative)
Re: (Score:3)
Just because you've hacked RAID into part of a backup strategy does not mean that backup is a standard use-case for RAID. It's far too easy for the wrong disk to get overwritten because of all the things RAID is set up to do by default. With rsync, you're telling the disks exactly which direction the data needs to flow.
In a production environment, there's also a greater chance of failure using RAID because of the whole "plugging / unplugging drives" thing. Sure, it's rare, but your operating system and/or motherboard may or may not enjoy having drives attached and detached from its SATA bus.
Hearing the above, a systems administrator would assume you're confusing the terms "backup" and "mirror". It's a non-standard use-case, so the admin that arrives after you've moved on to another job will have to deal with that confusion.
My RAID backup strategy was fully supported and recommended by the manufacturer of the storage array, and was a big selling point. It wasn't a hack. Even tape backups can suffer problems from overwriting the wrong tape if someone does something stupid. "Oh hey, the backup system says this tape isn't expired yet, I'm sure I loaded the right tape, so I'll just do a hard-erase so I can write to it"
Re: (Score:3)
My RAID backup strategy was fully supported and recommended by the manufacturer of the storage array, and was a big selling point. It wasn't a hack. Even tape backups can suffer problems from overwriting the wrong tape if someone does something stupid. "Oh hey, the backup system says this tape isn't expired yet, I'm sure I loaded the right tape, so I'll just do a hard-erase so I can write to it"
Here's a Sun/Oracle doc that explains the procedure:
http://docs.oracle.com/cd/E19683-01/817-2530/6mi6gg886/index.html [oracle.com]
How to Use a RAID 1 Volume to Make an Online Backup
You can use this procedure on any file system except root (/). Be aware that this type of backup creates a “snapshot” of an active file system. Depending on how the file system is being used when it is write-locked, some files and file content on the backup might not correspond to the actual files on disk.
The following limitations apply to this procedure:
* If you use this procedure on a two-way mirror, be aware that data redundancy is lost while one submirror is offline for backup. A multi-way mirror does not have this problem.
* There is some overhead on the system when the reattached submirror is resynchronized after the backup is complete.
Re: (Score:2)
You're getting way too detailed to be implicating the highly generic term RAID in your list of fault conditions.
I will point out however that your premise that a system won't know which way to sync the data is wrong. Any running RAID implementation that syncs from a recently attached disk to the currently in use disk is just broken and would never get out of QA.
However, using RAID1 to mirror to an external drive isn't going to be of particular benefit unless the RAID implementation manages a changed-block m
Re: (Score:2)
It is, if you then disconnect half of it and move it offsite! I'm not sure that's the best way to do backups, though.
If I were this guy, I'd look into why it takes rsync so long to read the dir tree. This is one of those situations where no matter how much people say "Linux filesystems don't suffer from fragmentation," I nevertheless suspect you're suffering from highly fragmented directories. Let me guess: do you repeatedly come close to filling the disk? Maybe it's time to do this: after the next rsyn
ZFS: Snapshot + send (Score:2, Interesting)
Cleanest implementation of this I've seen is with ZFS.
You do a snapshot of your filesystem, and then do a zfs send to your remote backup server, which then replicates that snapshot by replaying the differences. If you are experiencing poor speed due to read/write buffering issues, pipe through mbuffer.
The only issue is that it requires that you have your OS on top of ZFS.
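As a rough illustration (pool, dataset, and host names are made up):

# snapshot, then stream it to the backup box; mbuffer smooths out bursty reads/writes
zfs snapshot tank/home@2013-05-08
zfs send tank/home@2013-05-08 | mbuffer -s 128k -m 1G | ssh backuphost 'zfs receive -F backup/home'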
Exclude directories (Score:5, Informative)
Are you backing up EVERYTHING on the laptop -- OS and data included? Even if you are only backing up your home directory, there is stuff you don't need to back up, like the .thumbnails directory, which can be quite large. Try using rsync's exclude option to restrict the backup to only what you care about.
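A hedged example; the paths and exclude patterns below are invented, adjust them to what you actually care about:

rsync -a --delete \
    --exclude='.thumbnails/' --exclude='.cache/' --exclude='Downloads/' \
    /home/user/ /mnt/backupdisk/home-user/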
DNA
AKA mrascii
COW or desync'ed RAID (Score:5, Informative)
In this case, it sounds like you want a fast on-demand sync rather than a RAID.
However, you could possibly use dm-raid for this if you're a linux user.
Have the internal disk(s) as a degraded md-raid1 partition. When you connect the backup disk, have it become part of the RAID and the disks should sync up. That said, it likely won't be any faster than rsync, quite possibly slower as it'll have to go over the entire volume.
Alternate solutions:
* Have a local folder that does daily syncs/backups. Move those to the external storage when it's connected.
CAVEATS: Takes space until the external disk is available
* Use a differential filesystem, or maybe something like a COW (copy-on-write) filesystem. Have the COW system sync over to the backup disk (when connected) and then merge it into the main filesystem tree after sync.
For example, /home is a combination of /mnt/home-ro (ro) and /mnt/home-rw (rw, COW filesystem). When external media is connected, /mnt/home-rw is synced to external media, then back over /mnt/home-ro.
OS? (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Robocopy doesn't keep the ACM dates across volumes. So it is certainly not a 1:1 copy.
The only thing that comes close, but still not there completely, is the legacy MS (Veritas) backup utility. And that one is far from automated.
Re: (Score:2)
Robocopy doesn't keep the ACM dates across volumes. So it is certainly not a 1:1 copy.
The only thing that comes close, but still not there completely, is the legacy MS (Veritas) backup utility. And that one is far from automated.
What about SyncToy? [microsoft.com] Seems to work pretty well, at least it does for me.
Re: (Score:2)
Robocopy doesn't keep the ACM dates across volumes. So it is certainly not a 1:1 copy.
Maybe I'm misunderstanding you, but robocopy does keep dates across volumes. You can also control whether or not you want to copy them. File times are copied by default, and for directory times you add the /DCOPY:T parameter. Are you speaking of some other underlying file system date?
CrashPlan (Score:4, Informative)
CrashPlan [crashplan.com] is free, but not open, and I think it will do everything you need. You can back up to an external disk, over the network to one of your own machines, or to a friend who also runs it. Great key-based encryption support. If you want, you can pay them for offsite backups (which is a great deal as well, in my opinion). It's cross-platform and easy to use. Never underestimate the benefits of off-site backups.
Re: (Score:2)
For the last month, I've been using CrashPlan to back up a 5.5TB filesystem over AFP to a remote AFS file share over the Internet. I did the initial backup across the LAN and then moved the drive array to its final destination. I'm now a few weeks in after the move and for the last 4 days, it has not backed up and is instead synchronizing block information. 4 days in and it's up to 59.9%. It spent 1.5 days about a week ago doing something like recursive block purging. I wish the client could do these housek
Re: (Score:2)
That is a long time. I think I had something similar when I 'adopted' a backup. Once it's in sync the backups are quite quick, with pretty much no 'start-up scan time'.
Re: (Score:2)
Excellent. Thank you!
Just use Windows Backup (Score:4, Insightful)
Windows Backup (since Vista) uses Volume Shadow Copy (VSS) to do block-level reverse incremental backups. I.e., it uses the journaling file system to track changed blocks and only copies over the changed blocks.
Not only that, it also backs up to a virtual hard disk file (VHD) which you can attach (mount) separately. This file system holds the complete history, i.e. you can use the "previous versions" feature to go back to a specific backup of a directory or file.
Re: (Score:2)
Lots of backup software uses VSS, pretty much any credible backup software on windows. It totally lacks automation, which is a pretty big downside.
I doubt he is using windows, since he mentions rsnapshot.
Re: (Score:2)
It totally lacks automation, which is a pretty big downside.
wbadmin.exe [microsoft.com] is available since Vista (where the VSS based image backup was introduced).
How is that a total lack of automation?
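For reference, a scheduled image backup along those lines might look roughly like this (drive letter, task name, and time are placeholders):

rem one-off run: image the critical volumes to the attached disk E:
wbadmin start backup -backupTarget:E: -allCritical -quiet

rem schedule it nightly via Task Scheduler
schtasks /Create /SC DAILY /ST 22:00 /TN "NightlyImage" /TR "wbadmin start backup -backupTarget:E: -allCritical -quiet" /RU SYSTEM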
Re: (Score:2)
You can use it for automation, sure, but out of the box it does not do any. Nearly no Windows user will know how to use that. It would need a shiny wizard and other mythical figures to do that for them.
My personal favorite is to have bacula do it, that is even less end user friendly though. It does mean all the schedules live on the server not the client, which is nice.
Re: Just use Windows Backup (Score:2)
Home versions of windows don't support scheduled backups. You might be able to hack something yourself using task scheduler and a batch file though.
Re: (Score:3)
Home versions of windows don't support scheduled backups. You might be able to hack something yourself using task scheduler and a batch file though.
No, that is not correct.
At least in Windows 7 *all* editions [microsoft.com] have the full image capability. Only the professional/enterprise editions can backup to a *network* drive. But in this case it is a local or attached disk, so the edition really does not matter.
Re: (Score:3)
Re:Just use Windows Backup (Score:4, Interesting)
Unless you're running Windows 8 or Server 2012, Windows Backup on Windows 7 and below is functionally obsolete due to the new 3TB+ drives now using 4K-sector Advanced Format technology.
Nice. So because you can buy large-capacity drives, backup solutions immediately become "functionally obsolete" even on a system that does not have such a drive? Tell me, did you buy a new BMW when Apple changed the connector for the iPhone 5? You know, the old BMWs are now "functionally obsolete".
Not that it matters much here anyway, because you got it wrong. Windows Backup *will* back up to drives larger than 3 TB, as long as they use 512e Advanced Format, where the drive logically presents 512-byte sectors but physically uses 4096-byte sectors. The solution is to use the GPT (GUID Partition Table) partitioning format. This works for Vista and up.
Drives that are exclusively 4096-byte (4K native) cannot be used with Windows 7 / Server 2008 R2 at all; that's a limitation of the OS and not the backup software, however.
Whooosh (Score:4, Interesting)
Holy cow people, you're missing the OP's point. It's taking 15 minutes to SCAN the 1TB drive.
I've run into the same problem on Windows and Linux, especially for remote rsync updates on Linux over slow wireless connections. It's not the 1TB that kills it, since I can read 4TB drives with hundreds of movies in seconds. It's the number of files that kills performance.
My solution on Windows is to take some of the directories with 10,000 files and put them into an archive (think clipart directories). Zip, TrueCrypt, tar, whatever. This speeds up reading the sub-directories immensely. Obviously, this only works for directories that are not accessed frequently. Also, FAT32 is much faster with 3000+ files in a directory than NTFS is. Most of my TrueCrypt volumes with LOTS of files use FAT32 just because of the directory reading speed.
On Linux systems, I just run rsync on SUB-directories. I run the frequently accessed ones more often and the less-accessed directories less often. Simple? No. My rsyncs are all across the wire, so I need the speed. Plus some users are on cell-phone wireless plans, so I need to minimize data usage.
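A rough sketch of that kind of split schedule (the crontab entries, paths, and host are made up):

# hot directories hourly, cold ones nightly, pushed over ssh
0 * * * *  rsync -az --delete /home/user/accounting/ backuphost:/backups/accounting/
0 3 * * *  rsync -az --delete /home/user/archive/    backuphost:/backups/archive/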
Re: (Score:2)
Yup almost everyone missed the point of having to deal with shitty File Systems.
Agreed about using the "dumb" FAT32 FS for speedy access!
It's too bad you couldn't load the FS meta-info into a RAM drive, or onto a SSD, kind of like how ZFS gives you the option with the ZIL on SSD.
Re: (Score:2)
Agreed. My first thoughts were ZFS, but with it being a laptop I figured it was more than likely a Windows box. Plus I wouldn't use BSD on a laptop either, and I don't quite trust ZFS on Linux yet... (but it's getting close). Also agree on the ZIL on SSD. I can keep quite a few VMs (websites) in cache on the SSD and hardly have to worry about the speed of the HDs. Plus backups from the filesystem level. One of those tools I can't believe I've lived without all these years.
Re:Whooosh (Score:4, Informative)
My solution on windows is to take some of the directories with 10,000 files and put them into an archive (think clipart directories).
I hope you are not an IT professional. Windows comes with a perfectly good backup solution built in. It will use the Volume Shadow Copy Service (VSS) to track changes as they occur and subsequently only back up the changed blocks. No need to scan *anything*, as the journaling file system has already recorded a full list of changes in the journal.
The backup is basically stored in a VHD virtual hard disk (plus some catalog metadata around it), so you can even attach the VHD and browse it. By default it lets you browse the latest backup, but the "previous versions" feature will let you browse back in time to any previous backup still stored in the VHD (the oldest backups will be pruned when the capacity is needed). The VHD is an inverse incremental backup because it stores the latest backup as the readily available version and only the incremental (block-level) differences between previous backup sets.
Moreover, VSS also ensures consistency for a lot of applications that are VSS-aware (VSS writers), i.e. database systems like Oracle and SQL Server, Active Directory, the registry, etc. VSS coordinates with the applications so that exactly when the snapshot is taken, the applications have flushed all state to disk. This means applications do not need to be stopped to get a consistent backup; database systems will not see a restore of a backup taken from a running system as a "crash" (as they would without such a service) from which they must recover through some other means (typically a roll-forward log).
Re: (Score:2)
That takes longer, since the find command scans the entire directory and file structure to find the directories. It also takes longer because querying the size takes more time than just querying the name. I just used rsync to scan some of the directories hourly (accounting data, document directories, etc). Other directories were daily, and others only monthly (install directories, tools, etc). I had to force the users into a certain file hierarchy, but that's what sysadmins are for :)
Do it on a lower level. (Score:3)
I'd think to use LVM and filesystem snapshots. The snapshot does the trick of journaling your changes and only your changes. You can ship the snapshot over to the backup volume simply by netcat-ing it over the network. The backup's logical volume needs to have same size as the original volume. It's really a minimal-overhead process. Once you create the new snapshot volume on the backup, the kernels on both machines are essentially executing a zero-copy sendfile() syscall. It doesn't get any cheaper than that.
Once the snapshot is transferred, your backup machine can rsync or simply merge the now-mounted snapshot to the parent volume.
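Roughly, with made-up volume group, volume names, and snapshot size (and note that netcat flag syntax differs between the traditional and BSD variants):

# on the laptop: freeze a point-in-time view of the volume
lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home

# on the backup machine: listen and write the raw stream into a same-sized LV
nc -l -p 9000 > /dev/vg0/home_backup

# on the laptop: stream the snapshot block device across the wire
dd if=/dev/vg0/home_snap bs=1M | nc backuphost 9000

# drop the snapshot once the transfer is done
lvremove -f /dev/vg0/home_snap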
Re:Do it on a lower level. (Score:4, Informative)
Well, of course I goofed; it's not that easy (well, it is, read on). A snapshot keeps track of what has changed, yes, but it records the old state, not the new state. What you want to transfer over is the new state. So you can use the snapshot for the location of changed state (its metadata only), and the parent volume for the actual state.
That's precisely what lvmsync [github.com] does. That's the tool you want to do what I said above, except that it'll actually work :)
Will your backup have you back up and running? (Score:2)
If you are spending time messing with a system that is not going to provide you with a running computer after a quick trip to the store for a new hard drive, then maybe you should rethink your goals.
And perhaps you would regret the time spent less if you knew that in the event of an emergency, your backup would not only save your data, but prevent a re-installation and updates and more updates and more updates, and hunting for installation media and typing in software keys.
AIX had/has a nice system for back
Step 1 get a real backup (Score:2)
Making a mirror every now and again is not a backup strategy. This is the canned "RAID is NOT a backup and never will be" advice. For a single laptop, something like Backblaze is probably a better bet.
Upgrade your rsync! (Score:5, Informative)
You're holding it wrong. ;)
rsync 2.x was horribly slow: it would scan the entire source looking for changed files, build a list of files, and only then (once the initial scan was complete) start to transfer data to the destination.
rsync 3.x starts building the list of changed files, and starts transferring data right away.
Unless you are changing a tonne of files between each rsync, it shouldn't take more than a few minutes using rsync 3.x to backup a 1 TB drive. Unless it's an uber-slow PoS drive, of course. :)
We use rsync to backup all our remote school servers. Very rarely does a single server backup take more than 30 minutes, and that's for 4 TB of storage using 500 GB drives (generally only a few GB of changed data). And that's across horrible ADSL links with only 0.768 Mbps upload speeds!
Going disk-to-disk should be even faster.
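For what it's worth, a pull of that sort might look like this (host and paths are examples):

# nightly pull of a remote server into a local backup tree; -z compresses over the slow link
rsync -aH --delete -z -e ssh root@school-server:/srv/ /backups/school-server/srv/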
Re: (Score:2)
ZFS - incremental/snapshot? (Score:5, Informative)
two pools, internalPool, externalPool
use ZFS send and receive to migrate your data from internal to external; you can do the whole fs, or incremental sends if you keep a couple of snaps local on your internal disk. This can get excessive if you have a lot of delta or you want to keep a long history.
http://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html [oracle.com]
of course you will need a system that can use ZFS. There are more options for that than Time Machine; it's block level and it's fast, and it doesn't depend on just one device. You can have multiple devices (I like to keep some of my data at work. Why? Because my backup solution is in the same house that would burn, if it burned...)
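A minimal sketch of the two-pool idea, assuming a dataset called "data" and made-up snapshot names:

# one-time full send of the dataset to the external pool
zfs snapshot internalPool/data@2013-05-01
zfs send internalPool/data@2013-05-01 | zfs receive externalPool/data

# later: snapshot again and send only the delta between the two snapshots
zfs snapshot internalPool/data@2013-05-08
zfs send -i internalPool/data@2013-05-01 internalPool/data@2013-05-08 | zfs receive -F externalPool/data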
Re: (Score:2)
Very nice suggestion about using two pools !
>of course you will need a system that can use ZFS
Actually I was surprised how well "ZFS on Linux" works if you don't have a FreeNAS/BSD system.
* http://zfsonlinux.org/ [zfsonlinux.org]
It is too bad the ZFSonLinux documentation is total garbage but at least it was relatively painless to get it to work on a spare Ubuntu box. IIRC, ZFS on Linux setup was ...
Re: (Score:2)
I declare you the winner.
Re: (Score:2)
Uhh... Why the rush? (Score:2)
I usually hate making posts where I question the questioner rather than provide an answer, but with 1 TB of information you should put on the patience cap. It will take as long as it takes.
To break down what you are wanting:
I want a backup based on a journaling file system sort of thing that works incrementally, slowing down every disk operation by a few milliseconds, so I can shave 15 minutes off of a backup procedure, but I still have to send the same data. I don't think that would be very wise. The b
Btrfs send/receive (Score:5, Informative)
Btrfs send/receive should possibly do the trick. After first cloning the disk, and before every subsequent transfer, create a reference snapshot on the laptop and delete the previous one after the transfer.
$ btrfs subvolume snapshot -r /mnt/data/orig /mnt/data/backup43
$ btrfs send -p /mnt/data/backup42 /mnt/data/backup43 | btrfs receive /mnt/backupdata
$ btrfs subvolume delete /mnt/data/backup42
I haven't tried this myself, so the necessary disclaimer: this may eat your disk or kill a kitten ;-)
Btrfs send & receive (Score:3)
Btrfs has tools for doing this. It also comes with find-new, which lets you find exactly which files have changed between snapshots, and it does so basically instantaneously.
Though Btrfs might not be the solution for ensuring data integrity at this point... But setting up hourly snapshots of your drives can be quite nice when you accidentally destroy something you've created after the last backup.
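For the find-new part, something like this (the subvolume path and generation number are placeholders):

# print the subvolume's current generation ("transid marker")
btrfs subvolume find-new /mnt/data/home 9999999

# list files changed since generation 12345
btrfs subvolume find-new /mnt/data/home 12345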
Re: (Score:2)
>Though Btrfs might not be the solution for ensuring data integrity at this point.
It's certainly close though. It also has a bunch of data integrity features (like checksumming) that make it far safer than ext (and most other filesystems apart from ZFS). If you have slightly dodgy hardware, btrfs will let you know, whereas your data may silently corrupt on ext4.
iFolder (Score:2)
I use iFolder for this. It has clients for Windows, Linux, and Mac platforms, and works reasonably well. The server was a bit of a pain to get set up though. It used to be a Novell product but has spun off as its own open source project. You can check it out at ifolder.com
IBM's TSM (Score:2)
Real backup, and since 6.3 it does journal-based backups for Ext2, Ext3, Ext4, XFS, ReiserFS, JFS, VxFS, and NSS.
The other option I have seen (surprisingly, for GPFS, as TSM does not do journal-based backups for GPFS even though both are IBM products) is to register with the DMAPI (this would only work for XFS, I think) and then use that to capture all activity on the file system. You could then use that to generate your list of files to back up. Admittedly this is going to require you to get your hands dirty and do
Re:TimeMachine (Score:5, Insightful)
Wouldn't solve his problem. Time Machine takes considerable time to prep and start a backup before it actually does any work; I'd guess it's likely doing the same sort of thing as rsync: gathering a list of changes.
Re: (Score:2)
Part of why Time Machine takes so long is that it has to create hard links to every file that hasn't changed. If you look in your backup folder, there's a list of dated directories. Each one is a complete image of your hard drive at the time of the backup. The deduplication comes from hard links, so you can delete older backups directly from the filesystem without messing anything up. While it does take time, I really wish more backup systems were set up that way.
Re: (Score:2)
Part of why Time Machine takes so long is that it has to create hard links to every file that hasn't changed.
No, it does not. If you read John Siracusa's excellent OS X Leopard review... oh, fuck it. Just read my reply to a sister comment of yours [slashdot.org].
tl;dr: FSEvents and hard links to directories.
Re: (Score:2)
Wouldn't solve his problem. Time Machine takes considerable time to prep and start a backup before it actually does any work; I'd guess it's likely doing the same sort of thing as rsync: gathering a list of changes.
AFAIK, Time Machine is a GUI frontend for rsync. Watch Activity Monitor.app when it fires up; that will tell you. I don't use Time Machine personally; I know how to use rsync.
Re: (Score:3)
AFAIK, Time Machine is a GUI frontend for rsync. Watch Activity Monitor.app when it fires up; that will tell you. I don't use Time Machine personally; I know how to use rsync.
No, Time Machine is NOT a frontend for rsync. Yes, you can achieve something that resembles Time Machine by using the --link-dest option.
I use rsync --link-dest regularly through a script called tym ("Time rsYnc Machine") to back up stuff on systems at work for which I don't have admin privileges to configure Time Machine (oh, I haven't done it in a few weeks, I should do it ASAP!). So I know it has some drawbacks compared to TM, the main two being:
Re: (Score:2)
Wouldn't solve his problem. Time Machine takes considerable time to prep and start a backup before it actually does any work; I'd guess it's likely doing the same sort of thing as rsync: gathering a list of changes.
No, it doesn't. It only takes a considerable amount of time to prep if you haven't backed up in many days. If you have backed up recently the prep time is quite short. And if you use the default configuration (in which it backs up every hour) the prep time is almost nil.
If you read John Siracusa's excellent OS X Leopard review [arstechnica.com] you will find that Time Machine avoids traversing the whole hierarchy because it taps into FSEvents [arstechnica.com] which keeps a record of the files that have been modified since the last backup.
Re: (Score:2)
Does it destroy the old backup or create a new folder? I realize you might want to delete the old backup to save space when it does this, but you wouldn't have to do it.
I wish there was something more like Time Machine for Windows and Linux - especially the part where there's dated directories with hard links back to the original revision of the files.
Re: (Score:3)
I wish there was something more like Time Machine for Windows and Linux - especially the part where there's dated directories with hard links back to the original revision of the files.
As far as I'm aware, the "File History" feature in Windows 8 will do this, and it's much more granular than what was sort of "built in" via the "Previous Versions" tab on a file or folder's properties. However, with it set up properly, even the "Previous Versions" feature that dates back to at least Vista (if not XP SP3, I don't recall offhand) will provide you with exactly what you're asking for: browseable point-in-time snapshots of your files/folders.
One of the things that piqued my interest in M
Re:find & diff (Score:4, Insightful)
How is traversing the whole directory tree with find different from what rsync does?
Running a daemon that lists modified files using inotify might work.
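A sketch of that daemon idea using inotifywait from inotify-tools (paths are examples; you'd still need to consume the log at backup time):

# append every created/modified/deleted/moved path under /home to a change log
inotifywait -m -r -e modify,create,delete,move --format '%w%f' /home >> /var/tmp/changed-files.log &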
Re:find & diff (Score:5, Informative)
It's different in that you don't have to sit and wait for it, and the backup itself will consist of only the actual copying.
I suggest you look again at rsync.
- It compares changed files and copies only what has been changed. Changed files are identified by differing mtimes (by default).
- rsync can also handle removed files with the --delete option.
- It can do the entire filesystem tree in a single command.
- There are filter options so you can include/exclude what paths to copy (eg you don't want to copy /proc, and there are some directories such as /tmp and /run which you may not care about).
Re: (Score:3)
A snippet:
[more rotations above]
# if a previous backup exists, hard-link it aside as increment.1 before refreshing increment.0
if [ -d $BACKUP_DEST/$(basename $i)/increment.0 ]; then
cp -al $BACKUP_DEST/$(basename $i)/increment.0 $BACKUP_DEST/$(basename $i)/increment.1
fi
# sync the source into increment.0; unchanged files stay shared as hard links with increment.1
rsync -av --delete --exclude-from="$EXCLUDE_LIST" $i/ $BACKUP_DEST/$(basename $i)/increment.0/
# stamp the backup directory with the time of this run
touch $BACKUP_DEST/$(basename $i)/increment.0
done
Re: (Score:3)
Why don't you try "--link-dest"? It's pseudo-incremental, that is: unchanged files are hardlinked to the previous backup, meaning there's no space or bandwidth consumption for unchanged files, but each day's replica is a full backup.
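A minimal sketch of a daily --link-dest rotation (paths and date format are arbitrary):

# today's backup is a full tree; unchanged files are hard links into yesterday's tree
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
rsync -a --delete --link-dest=/backups/$YESTERDAY/ /home/user/ /backups/$TODAY/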
Re: (Score:2)
However, he'll want to keep in mind that, depending on his environment, he may have some other issues. For instance, I'd like to use it at work, but I can't because file access times are important to us, and rsync changes the access times on the source files. Last I checked, there was no option to make it stop that, so I'm stuck with tar.
Re:find & diff (Score:4, Interesting)
Just curious, why do you require access time? I set 'noatime' on all partitions.
Re: (Score:3)
Rsync copies only changed files. The time-consuming part is reading all directories in the directory tree.
Re: (Score:2)
stat()ing the files on filesystems that require that is usually orders of magnitude more time-consuming than the actual directory reading. IMO, filesystems should store mtime in the directory entry and readdir calls should return it.
Re: (Score:2)
Re: (Score:2)
That will still take ages...
Why not give BitTorrent Sync a go? It's a decentralized "Dropbox" on steroids!
http://labs.bittorrent.com/experiments/sync.html [bittorrent.com]
Re: (Score:2)
Did you even read the title of the submission? He wants FOSS.
Re: (Score:2)
when has a typical user ever planned ahead?
Re:Time Machine (Score:4, Informative)
TimeMachine takes about 15 minutes to do the prep work before it starts copying for me, on a 2012 Retina MBP with 16GB of RAM and only 256GB of disk space ... 64GB taken by an unbacked-up BootCamp partition and another 120 or so eaten by Windows VMs that don't get backed up either ... i.e. it's not a slow spinning platter backing up a terabyte of data.
I see no indication of any journal, and it certainly isn't making things faster. Pretty freaking slow, actually.
Re:Time Machine (Score:4, Interesting)
This doesn't match my experience. Time Machine fires up in the background, does its thing, and then stops shortly thereafter. Certainly much less than 15 minutes. More like five or less. This is on a new-ish iMac with a 3TB internal drive.
It wouldn't even be noticeable were it not for the fact that I can hear the TM destination drive (sitting on a shelf behind me) spin up once an hour.
Re: (Score:2)
Sorry, internal drive is 2TB. Time Machine destination is 3TB.
Re: (Score:3)
TM could be doing 15 minutes of work on your own HD before it bothers spinning-up the external, you realize.
You may be correct, but your evidence doesn't match your assertion.
Re:Time Machine (Score:4, Interesting)
This is my current experience with mine too. However, during the prep stage it is making room on my Time Machine drive to receive the changes. Consolidating the older files takes time.
When my drive was new and had plenty of space, the prep stage was much shorter.
Re: (Score:2)
TimeMachine takes about 15 minutes to do the prep work
Yes, because naturally he's using a Mac. He must have certainly been in a Starbucks or Panera when he posted as well. Around the Bay Area, no less.
Re: (Score:3)
Re: (Score:2)
I backup a 2012 MacBook Air every evening to a 1TB 5400RPM USB drive - plug it in, it detects it, and the backup is done in 3 minutes.
Re: (Score:2)
The Airports are really flaky for TM backup. The bottleneck I've seen with them is that they just quit working and need to be reset. Even over Ethernet.
Re: (Score:2)
Re: (Score:2)
Time Machine keeps an event store journal of changes, the process is described at How Time Machine Works its Magic [pondini.org]. What you're describing might be a "deep scan" pass. It's also possible you're touching a lot of directories with updates, which makes the optimization they apply not as useful.
There are cases where the event store makes Time Machine backups nearly instant, which is never the case for rsync based approaches being complained about here.
Re: (Score:2)
TimeMachine takes about 15 minutes to do the prep work before it starts copying for me
Well with my 2012 non-Retina MBP with a 1TB disk it only takes a few minutes at most. I guess those extra screen pixels must really slow it down! ;-)
Re: (Score:2)
As a mac owner, I'm sure you realize that mention of it being Retina is only related to Apple not using model numbers in almost all of their documentation and sales pages (except maybe in the fine print).
Re: (Score:2)
I think the slowdown is from the hard links it creates in the backup directory on the external drive. That takes a lot of time. Every file that's changed gets written to the backup directory as a new file. Every file that hasn't changed gets written as a hard link to the inode of the original backup of that file. So if you have 200,000 files, and 10 of them changed, you still have to write 200,000 entries for the backup.
Still - I don't ever see 15 minutes. I'm curious what's causing your problem and wo
Re: (Score:2)
TimeMachine takes about 15 minutes to do the prep work before it starts copying for me, on a 2012 Retina MBP with 16GB of RAM and only 256GB of disk space ... 64GB taken by an unbacked-up BootCamp partition and another 120 or so eaten by Windows VMs that don't get backed up either ... i.e. it's not a slow spinning platter backing up a terabyte of data.
I see no indication of any journal, and it certainly isn't making things faster. Pretty freaking slow, actually.
To what are you backing up, and how much data do you generate in a backup interval? It sounds like you're backing up to a network storage device on a wireless network or just a SLOW network, OR you are generating hundreds of megabytes if not gigabytes of data during a backup interval. Basically, something is either very wrong or you are a data hog, for an SSD-equipped machine to back up that slowly.
Re: (Score:2)
Considering how slow it is, I doubt it.
I sometimes use a Mac; I still prefer rsnapshot over some backup that is likely hard to deal with if you don't have another Mac.
Re: (Score:2)
even on a triple boot. It does not work on HFS+ volumes that have been used by 10.4, or OS 9.
Time Machine is useless to me and my client... so your premise is faulty.
No, you mean buy a RECENT Macintosh.
Re: (Score:2)
Yes, something more recent than 2004.
What are you doing that means you need to keep OS 9 and machines older than 9 years running?
(FWIW, I don't think the OP has this problem, if he's got a laptop with a 1TB internal disk.)
Re: (Score:3)
Welcome to the future. We can even use variable-width fonts now.
Re: (Score:2)
Have you heard of the internet?
It is super cool, you can leave the data in your datacenter and get to it from anywhere! You can even show the customer right on the server instead of dealing with your laptop and a painfully slow USB connection.
Re: (Score:2)
How does a painfully slow USB connection compare with a painfully slow Internet connection?
Re: (Score:3)
You don't transfer anywhere near as much data over it.
You leave that on the server and use the internet just for the nice cheap display.
Re: (Score:2)
Specifically, how DRBD handles recovery after an outage of the replication network [drbd.org]. The situations where the disk isn't plugged in will look just like the network-outage scenario DRBD handles. I'm not sure whether this will be more or less efficient than the mdadm bitmap approach outlined above, but those are the two main ways people do this specific operation.
Re: (Score:3)
Do you modify all your research work from the last 20 years? If not, exclude it from backup, since you already have it backed up and are not changing it.