Ask Slashdot: Practical Bitrot Detection For Backups? 321
An anonymous reader writes "There is a lot of advice about backing up data, but it seems to boil down to distributing it to several places (other local or network drives, off-site drives, in the cloud, etc.). We have hundreds of thousands of family pictures and videos we're trying to save using this advice. But in some spot checks of our archives, we're seeing bitrot destroying our memories. With the quantity of data (~2 TB at present), it's not practical for us to examine every file periodically so we can manually restore them from a different copy. We'd love it if the filesystem could detect this and try correcting first, and if it couldn't correct the problem, it could trigger the restoration. But that only seems to be an option for RAID-type systems, where the drives are colocated. Is there a combination of tools that can automatically detect these failures and restore the data from other remote copies without us having to manually examine each image/video and restore it by hand? (It might also be reasonable to ask for the ability to detect a backup drive with enough errors that it needs replacing altogether.)"
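The detection half of the question doesn't need filesystem support at all: a checksum manifest built once and re-verified periodically will flag any file whose bytes have silently changed. A minimal sketch in Python (the paths, function names, and JSON-free in-memory manifest are illustrative assumptions, not a specific tool the submitter mentions):

```python
import hashlib
import os

def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so large videos don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Map relative path -> digest for every file under root (run once, store it)."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest

def verify(root, manifest):
    """Return the paths whose current digest no longer matches the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(os.path.join(root, rel)) != digest]
```

Anything `verify()` reports is a candidate for restoration from one of the other copies; which copy is good can be settled by checking each against the same manifest.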
Pars (Score:0, Insightful)
Should have parred your data to begin with.
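For anyone who hasn't used parity archives: the core idea behind tools like PAR2 is that extra parity blocks stored alongside the data let you rebuild a damaged block from the survivors. A toy single-parity sketch in Python (real PAR2 uses Reed-Solomon coding with many recovery blocks, not plain XOR; the function names here are made up for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def make_parity(blocks):
    """One parity block covering all data blocks."""
    return xor_blocks(blocks)

def recover(blocks, parity, missing_index):
    """Rebuild the one missing block by XORing the parity with the survivors."""
    survivors = [b for i, b in enumerate(blocks) if i != missing_index]
    return xor_blocks(survivors + [parity])
```

With one parity block you can lose any single data block and get it back; PAR2's Reed-Solomon scheme generalizes this so N recovery blocks can repair any N damaged blocks.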
Re:Excellent question (Score:3, Insightful)
Bitrot is a myth in modern times. Floppies and cheap-ass tape drives from the 90s had this problem, but anything reasonably modern (GMR) will read what you wrote until mechanical failure.
The key therefore is to verify as you write. Usually, verifying a sample of a few GB will let you know if everything went OK. Do your backups with checksums of some sort. A modern tape drive and backup software will do that automatically, and let you schedule a verify pass as part of the backup job (2 TB? That's 1 tape - might want to consider that), though ideally you should verify a tape on a different drive than the one you wrote it on.
For disk-based backups, local or cloud, I strongly recommend archiving to a format with checksums (RAR etc) over some sort of raw file copy. Especially for anything going over the network: RAR a volume/file set locally first, then upload, then test the archive.
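The "test the archive" step works because these formats store a per-file checksum: ZIP keeps a CRC-32 for every member, and RAR stores similar per-file checksums. As an illustration with Python's standard library (using ZIP rather than RAR, since `zipfile` ships with Python; `ZipFile.testzip()` returns the first member whose CRC fails, or `None` if all pass):

```python
import zipfile

def archive_ok(path):
    """True if every member's stored CRC-32 still matches its data."""
    try:
        with zipfile.ZipFile(path) as zf:
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False
```

Run this against the uploaded copy, not the local one, so you're testing the bytes that actually survived the network transfer.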
If you have a superstitious fear of bitrot, you can always do some random sampling of archive integrity, and keep multiple historical copies of files just in case (e.g., don't just delete backup N-1 when you do backup N, do a rotation scheme).
Re:Excellent question (Score:2, Insightful)
It doesn't seem that way... http://forums.freenas.org/threads/ecc-vs-non-ecc-ram-and-zfs.15449/