Ask Slashdot: Practical Bitrot Detection For Backups? 321
An anonymous reader writes "There is a lot of advice about backing up data, but it seems to boil down to distributing it to several places (other local or network drives, off-site drives, in the cloud, etc.). We have hundreds of thousands of family pictures and videos we're trying to save using this advice. But in some sparse searching of our archives, we're seeing bitrot destroying our memories. With the quantity of data (~2 TB at present), it's not really practical for us to examine every one of these periodically so we can manually restore them from a different copy. We'd love it if the filesystem could detect this and try correcting first, and if it couldn't correct the problem, it could trigger the restoration. But that only seems to be an option for RAID type systems, where the drives are colocated. Is there a combination of tools that can automatically detect these failures and restore the data from other remote copies without us having to manually examine each image/video and restore them by hand? (It might also be reasonable to ask for the ability to detect a backup drive with enough errors that it needs replacing altogether.)"
PAR2 (Score:5, Informative)
http://www.quickpar.org.uk/ [quickpar.org.uk]
http://chuchusoft.com/par2_tbb/ [chuchusoft.com]
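For reference, a minimal par2cmdline session might look like this (the -r10 redundancy level and the file names are illustrative choices, not recommendations):

```shell
# Create recovery data covering ~10% of the archive (adjust -r to taste)
par2 create -r10 photos.par2 *.jpg

# Later: verify the files against the recovery set
par2 verify photos.par2

# If verification reports damage, attempt an automatic repair
par2 repair photos.par2
```

The nice part is that the .par2 files can live next to the data on any dumb storage, so the redundancy travels with the backup.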
ZFS filesystem (Score:5, Informative)
A single command will do that:
zpool scrub
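For example (the pool name "tank" is a placeholder, and a weekly cron schedule is one reasonable choice, not the only one):

```shell
# Start a scrub, which re-reads every block and verifies its checksum
zpool scrub tank

# Watch progress and see any errors that were found and repaired
zpool status -v tank

# Schedule it weekly, e.g. in root's crontab (Sundays at 3am)
# 0 3 * * 0 /sbin/zpool scrub tank
```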
Re:Excellent question (Score:5, Informative)
Not all cloud storage is expensive. It's only $4 a month for unlimited backups to CrashPlan.
They also do checksums and versioning and can be set to never remove deleted files from the backup.
I have 12.8TB backed up to them and it's been working great.
Other than that, ZFS can't be beat. I use that as well.
Re:uhuh (Score:2, Informative)
Warning for all UNIX newbies: that command will reset the file to 0 bytes. Just so you know.
(I've seen cases where a rookie setting up a Linux system gets jokingly told to run one of these "rm -rf /" commands and actually ends up wrecking his system.)
Re:ZFS filesystem (Score:5, Informative)
Agreed, ZFS does exactly this, though without the remote file retrieval portion.
To elaborate:
http://en.wikipedia.org/wiki/ZFS#ZFS_data_integrity [wikipedia.org]
End-to-end file system checksumming is built in, but by itself this will only tell you the files are corrupt. To get the automatic correction, you also need to use one of the RAID-Z modes (multiple drives in a software raid). OP said they wanted to avoid that, but for this kind of data I think it should be done. Having both RAID and an offsite copy is the best course.
You could combine it with some scripts inside a storage appliance (or old PC) using something like Nas4Free (http://www.nas4free.org/), but I'm not sure what it has "out of the box" for doing something like the remote file retrieval. What it would give is the drive health checks that OP was talking about; this can be done with both S.M.A.R.T. info and emailing error reports every time the system does a scrub of the data (which can be scheduled).
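As a sketch of the drive-health side, assuming smartmontools is installed and /dev/ada0 stands in for your actual drive:

```shell
# Quick overall health verdict from the drive's SMART self-assessment
smartctl -H /dev/ada0

# Kick off a short self-test; results appear in the log a few minutes later
smartctl -t short /dev/ada0
smartctl -l selftest /dev/ada0
```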
Building something like this may cost a bit more than just an external drive, but for this kind of irreplaceable data it is worth it. A small Atom server board with 3-4 drives attached would be plenty, would take minimal power, and would allow access to the data from anywhere (for automated offsite backup pushes, viewing files from other devices in the house, etc).
I run a nas4free box at home with RAID-Z3 and have been very happy with the capabilities. In this configuration you can lose 3 drives completely and not lose any data.
Re:Checksums? (Score:5, Informative)
I never archive any significant amount of data without first running this command at the top of the tree:
find . -type f -not -name md5sum.txt -print0 | xargs -0 md5sum >> md5sum.txt
It's always good to run md5sum --check right after copying or burning the data. In the past, at least a couple of percent of all the DVDs that I've burned had some kind of immediate data error.
(A while back, I rescanned a couple of hundred old DVDs that I burned ranging up to 10 years old, and I didn't find a single additional data error. I think that a lot of cases where people report that DVDs deteriorate over time, they never had good data on them in the first place and only discover it later.)
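To illustrate the workflow end to end (the scratch directory and file names are made up for the demo):

```shell
# Set up a scratch archive with a couple of files
mkdir -p /tmp/md5demo && cd /tmp/md5demo
echo "photo data" > img1.jpg
echo "video data" > vid1.mp4

# Record checksums, excluding the checksum file itself
find . -type f -not -name md5sum.txt -print0 | xargs -0 md5sum >> md5sum.txt

# Later, on any copy of the tree: --quiet prints only files that fail
md5sum --check --quiet md5sum.txt && echo "all files OK"
```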
Re:ZFS filesystem (Score:5, Informative)
You don't need raidz or multiple drives to get protection against corrupt blocks with ZFS. It supports ditto blocks, which basically just means mirrored copies of blocks. It tries to keep ditto blocks as far apart from each other on the disk as possible.
By default, ZFS only uses ditto blocks for important filesystem metadata (the more important the data, the more copies). But you can tell it that you want to use ditto blocks on user data too. All you do is set the "copies" property:
# zfs set copies=2 tank
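For example (the pool name "tank" is from the post above; note that the property only affects data written after it is set, so set it before loading your archive):

```shell
# Keep two copies of every user data block in this dataset
zfs set copies=2 tank

# Confirm the setting
zfs get copies tank
```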
Re:ZFS filesystem (Score:2, Informative)
I'm another fan of backups to disks stitched together with ZFS. In the last year I've had two cases where "zfs scrub" started to report and correct errors in files one to two months in advance of a physical hard drive failure (I have it scheduled to run weekly). Eventually the drives faulted and were replaced, but I had plenty of warning, and RAIDZ2 kept everything humming along perfectly while I sourced replacements.
For offsite backups I currently rotate offline HDDs, but I should move to cloud storage. Give a bit of my surplus space and bandwidth to someone like Symform, and in turn they give me a free little slice of the cloud to have TrueCrypt archives mirrored into. Win-win!
Re:BTRFS filesystem (Score:5, Informative)
I'll be the heretic here, but on Windows 8.1 and Windows Server 2012 R2, there is a feature called Storage Spaces. It works similarly to ZFS: you toss drives into a pool, then create a volume that is either simple, mirror, or with parity, and Windows does the rest. If a volume needs more space, toss some more drives in the pool.
To boot, it even offers autotiering, so frequently used data can be stored on an SSD while the rest remains on the HDDs. Deduplication is handled at the filesystem level [1].
No, this isn't a replacement for a SAN with RAID 6 and real-time deduplication, but it does get Windows at least in the same ballgame as Oracle with ZFS.
[1]: Not active deduplication. The data is initially stored duplicated, but a background task finds identical blocks and adds pointers. Of course, the built-from-scratch filesystem, ReFS (which can check for bit rot on reads like ZFS), doesn't have this, so one is still stuck with NTFS for this feature.
Re: PAR2 (Score:5, Informative)
Use non-LTH BD-R media. It's seriously the best media we've ever had for long-term archival storage, hands-down, no contest. Unlike DVD+/-R's organic dye, it's inorganic phase-change WORM... the laser melts a spot in the recording alloy and permanently changes its crystalline structure, so the bits are about as close to 'carved in stone' as you're likely to ever get. As a technology, it's not cheap... but it definitely minimizes the number of things that can go wrong over a ~25-year timeframe:
* decouples media from its player... the achilles heel of hard drive-based backup schemes. A broken hard drive means a spectacularly expensive data-recovery job. A broken BD drive means buying a new one.
* phase-change media doesn't bleach or darken with age... and if it's going to delaminate or anything (like early optical discs often did), it's overwhelmingly likely to happen sooner rather than later (while you still have the originals available to re-archive if necessary).
* I think we can safely accept that future evolutions of optical discs will remain backwards-compatible with reading older media. Seriously, CDs are THIRTY YEARS OLD, and any Blu-Ray player from China can still play them just fine (plus everything that's ever been commonly burned/stamped into them). A 2037 Apple Eve might have the masses drooling over its legacy-free minimalist purity, but the rest of us will have a 600 petabyte optical drive manufactured by a sweatshop in Uganda or Haiti that can read old BD-R discs just fine (at least, after opening it up and soldering a wire across two pads on the circuit board to make it think it's supposed to be their $6,000 enterprise version instead).
Re:BTRFS filesystem (Score:4, Informative)
The only way to truly prevent bitrot is by maintaining at least three complete copies of the data, and regularly compare between them.
There you go again. Acting like you know what you're talking about, but you don't.
ZFS and BTRFS have a much more efficient way to ensure correctness: CRC [wikipedia.org] of everything written. That is what is checked when you do a zpool scrub or a btrfs scrub. Random errors are very unlikely to produce the same checksum, so then you only need a second copy that doesn't produce CRC errors.
Hard drives are nowhere near as reliable [acm.org] as their manufacturers claim. Modern drives don't store the bits that you feed them exactly as you give them. Instead, they use CRC and error correcting codes, [arstechnica.com] so they only need most of the data to be correct. Usually, if the data doesn't match the CRC, and it cannot be corrected by ECC, then you get a read error instead of corrupted data. Which, I guess, is better than getting a corrupted picture. Ideally, a RAID would be able to recreate the missing block, but I can't find any reference to a RAID doing that.
But I've seen enough errors that I suspect something else is going on. It surely doesn't help that modern computers have many gigabytes of memory, but almost none have ECC on that memory. Your computer can be corrupting your data, and you have no warning that it's happening. In addition, hard drives lie. [acm.org] I'm not optimistic about the long-term storage of electronic data.
Re:BTRFS filesystem (Score:3, Informative)
RAID10 and similar systems are two RAID5 systems which are independent and regularly compare data; These can detect which system is inconsistent, so you will always have at least one copy of your data in a consistent state.
You were doing quite well up until you said that sentence...
Re: PAR2 (Score:4, Informative)
EEPROM also happens to be the ancestor of SLC flash, not MLC, TLC or worse.
Flash is like a leaky bucket that starts out full of water, and gets drained to some level when a cell's value is set:
SLC == "The bucket is either totally empty (0), or has some water in it (1)"
MLC == "The bucket can be totally empty (00), non-empty to ~33% full (01), ~33%-66% full (10), or 66-100% full (11)." After 1/3 of the water leaks out, the cell's value is corrupt.
TLC == same idea as MLC, but the bucket has EIGHT levels instead of four. Do the math to figure out how much metaphorical water can leak out before the cell's value becomes corrupted.
BIOS EEPROMs are also made on a larger process than high-density flash, so the buckets themselves are larger while the leaks remain relatively constant in size. In other words, you're comparing a metaphorical 55 gallon drum with a slow drip that has to be completely empty to change from 1 to 0 against a thimble with 8 tick marks on the side and a leak of the same size.
Re:A paranoid setup (Score:4, Informative)
good post, except for three details:
1. if you're using ZFS on both systems, you're *much* better off using 'zfs send' and 'zfs recv' than rsync.
do the initial full copy, and from then you can just send the incremental snapshot differences from then on.
one advantage of zfs send over rsync is that rsync has to check each file for changes (either file timestamp or block checksum or both) every time you rsync a filesystem or directory tree. With an incremental 'zfs send', it only sends the incremental difference between the last snapshot sent and the current snapshot.
you've also got the full zfs snapshot history on the remote copy as well as on the local copy.
(and, like rsync, you can still run the copy over ssh so that the transfer is encrypted over the network)
2. your price estimates seem very expensive. with just a little smart shopping, it wouldn't be hard to do what you're suggesting for less than half your estimate.
3. if you've got a choice between hardware raid and ZFS then choose ZFS. Even if you've already spent the money on an expensive hardware raid controller, just use it as JBOD and let ZFS handle the raid function.
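The send/recv flow from point 1 might look like this (pool, dataset, snapshot, and host names are all placeholders):

```shell
# One-time full replication of the first snapshot
zfs snapshot tank/photos@snap1
zfs send tank/photos@snap1 | ssh backuphost zfs recv backup/photos

# From then on, send only the delta between snapshots (over ssh,
# so the transfer is encrypted on the wire)
zfs snapshot tank/photos@snap2
zfs send -i @snap1 tank/photos@snap2 | ssh backuphost zfs recv backup/photos
```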