Ask Slashdot: Practical Bitrot Detection For Backups? 321
An anonymous reader writes "There is a lot of advice about backing up data, but it seems to boil down to distributing it to several places (other local or network drives, off-site drives, in the cloud, etc.). We have hundreds of thousands of family pictures and videos we're trying to save using this advice. But in some sparse searching of our archives, we're seeing bitrot destroying our memories. With the quantity of data (~2 TB at present), it's not really practical for us to examine every one of these periodically so we can manually restore them from a different copy. We'd love it if the filesystem could detect this and try correcting first, and if it couldn't correct the problem, it could trigger the restoration. But that only seems to be an option for RAID type systems, where the drives are colocated. Is there a combination of tools that can automatically detect these failures and restore the data from other remote copies without us having to manually examine each image/video and restore them by hand? (It might also be reasonable to ask for the ability to detect a backup drive with enough errors that it needs replacing altogether.)"
PAR2 (Score:5, Informative)
http://www.quickpar.org.uk/ [quickpar.org.uk]
http://chuchusoft.com/par2_tbb/ [chuchusoft.com]
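For scripting, par2cmdline handles the same format; a minimal sketch (the file names and the 10% redundancy figure are arbitrary examples):
par2 create -r10 photos.par2 *.jpg   # write recovery files with 10% redundancy
par2 verify photos.par2              # check every file against the recovery data
par2 repair photos.par2              # rebuild damaged files, if the damage is within that 10%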
Re: (Score:2)
I'm glad one person remembers optical media and its lack of this side effect
Funny, I remember optical media being unreadable just months after it was burned. Sure, you can say don't use cheap media, but how do you know your media is good?
Re: PAR2 (Score:5, Informative)
Use non-LTH BD-R media. It's seriously the best media we've ever had for long-term archival storage, hands-down, no contest. Unlike DVD+/-R, it's phase-change magneto-optical WORM... the laser liquefies the plastic, the magnet orients little shiny planar mirrors, the plastic solidifies, and the bits are about as close to 'carved in stone' as you're likely to ever get. As a technology, it's not cheap... but it definitely minimizes the number of things that can go wrong over a ~25-year timeframe:
* decouples media from its player... the Achilles' heel of hard drive-based backup schemes. A broken hard drive means a spectacularly expensive data-recovery job. A broken BD drive means buying a new one.
* phase-change MO media doesn't bleach or darken with age... and if it's going to delaminate or anything (like early optical discs often do), it's overwhelmingly likely to happen sooner rather than later (while you still have the originals available to re-archive if necessary).
* I think we can safely assume that future evolutions of optical discs will remain backwards-compatible with older media. Seriously, CDs are THIRTY YEARS OLD, and any Blu-Ray player from China can still play them just fine (plus everything that's ever been commonly burned/stamped into them). A 2037 Apple Eve might have the masses drooling over its legacy-free minimalist purity, but the rest of us will have a 600 petabyte optical drive manufactured by a sweatshop in Uganda or Haiti that can read old BD-R discs just fine (at least, after opening it up and soldering a wire across two pads on the circuit board to make it think it's supposed to be their $6,000 enterprise version instead).
Re: (Score:2)
Optical has had a good run, but I'm betting that in 2037, optical will be dying or dead.
There's a lot of theoretical improvements left in optical disk technology, but they're unlikely to become common or cheap. I see possibly one generation after Blu-ray before the consumer standards stop and the access to cheap technology to drive advancements in optical storage disappears. Spinning disk is largely thought of as the primary competitor, but what's going to give optical the biggest headache is flash.
Flash
Re: PAR2 (Score:4, Informative)
EEPROM also happens to be the ancestor of SLC flash, not MLC, TLC or worse.
Flash is like a leaky bucket that starts out full of water, and gets drained to some level when a cell's value is set:
SLC == "The bucket is either totally empty (0), or has some water in it (1)"
MLC == "The bucket can be totally empty (00), non-empty to ~33% full (01), 33%-~66% full (10), or 66-100% full (10). After 1/3 the water leaks out, the cell's value is corrupt.
TLC == same idea as MLC, but the bucket has EIGHT levels instead of four. Do the math to figure out how much metaphorical water can leak out before the cell's value becomes corrupted.
BIOS EEPROMs are also built on a larger process than high-density flash, so the buckets themselves are larger while the leaks remain relatively constant in size. In other words, you're comparing a metaphorical 55-gallon drum with a slow drip that has to be completely empty to change from 1 to 0 to a thimble with 8 tick marks on the side and a leak of the same size.
Re: (Score:2)
yes, because rewritable disks have never gone wrong, right?
ZFS filesystem (Score:5, Informative)
A single command will do that:
zpool scrub
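Spelled out, assuming a pool named "tank":
zpool scrub tank       # re-read every block and verify it against its checksum
zpool status -v tank   # error counts, plus the names of any files that couldn't be repaired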
Re:ZFS filesystem (Score:5, Informative)
Agreed, ZFS does exactly this, though without the remote file retrieval portion.
To elaborate:
http://en.wikipedia.org/wiki/ZFS#ZFS_data_integrity [wikipedia.org]
End-to-end file system checksumming is built in, but by itself this will only tell you the files are corrupt. To get the automatic correction, you also need to use one of the RAID-Z modes (multiple drives in a software raid). OP said they wanted to avoid that, but for this kind of data I think it should be done. Having both RAID and an offsite copy is the best course.
You could combine it with some scripts inside a storage appliance (or old PC) using something like Nas4Free (http://www.nas4free.org/), but I'm not sure what it has "out of the box" for doing something like the remote file retrieval. What it would give is the drive health checks that OP was talking about; this can be done with both S.M.A.R.T. info and emailing error reports every time the system does a scrub of the data (which can be scheduled).
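The scheduling itself is just cron; a sketch, again assuming a pool named "tank" (appliances like Nas4Free generally expose the same thing through their web UI):
# /etc/crontab: scrub every Sunday at 03:00
0  3  *  *  0  root  /sbin/zpool scrub tank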
Building something like this may cost a bit more than just an external drive, but for this kind of irreplaceable data it is worth it. A small Atom server board with 3-4 drives attached would be plenty, would take minimal power, and would allow access to the data from anywhere (for automated offsite backup pushes, viewing files from other devices in the house, etc).
I run a nas4free box at home with RAID-Z3 and have been very happy with the capabilities. In this configuration you can lose 3 drives completely and not lose any data.
Re:ZFS filesystem (Score:5, Informative)
You don't need raidz or multiple drives to get protection against corrupt blocks with ZFS. It supports ditto blocks, which basically just means mirrored copies of blocks. It tries to keep ditto blocks as far apart from each other on the disk as possible.
By default, ZFS only uses ditto blocks for important filesystem metadata (the more important the data, the more copies). But you can tell it that you want to use ditto blocks on user data too. All you do is set the "copies" property:
# zfs set copies=2 tank
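Two caveats worth knowing: the property can be set per dataset instead of pool-wide, and it only applies to blocks written after it is set, so existing files must be rewritten to gain their extra copy. A hypothetical example:
# zfs set copies=2 tank/photos               (only this dataset pays the 2x space cost)
# zpool scrub tank && zpool status -v tank   (verify; -v names any files damaged beyond repair)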
Re: (Score:3)
true, but you do need multiple disks (mirrored or raidz) to protect against drive failure.
two or more copies of your data on the one disk won't help at all if that disk dies.
fortunately, zfs can give you both raid-like multiple disk storage (mirroring and/or raidz) as well as error detection and correction.
That ZFS_data_integrity [wikipedia.org] link in the post you were replying to gives a pretty good summary of how it works.
The paragraphs immediately above that (titled 'Data integrity', 'Error rates in hard disks', and
Re: (Score:2, Informative)
I'm another fan of backups to disks stitched together with ZFS. In the last year I've had two cases where "zfs scrub" started to report and correct errors in files one to two months in advance of a physical hard drive failure (I have it scheduled to run weekly). Eventually the drives faulted and were replaced, but I had plenty of warning, and RAIDZ2 kept everything humming along perfectly while I sourced replacements.
For offsite backups I currently rotate offline HDD's, but I should move to Cloud storage. G
Re:BTRFS filesystem (Score:5, Informative)
I'll be the heretic here, but on Windows 8.1 and Windows Server 2012 R2, there is a feature called Storage Spaces. It works similarly to ZFS: you toss drives into a pool, then create a volume that is either simple, mirror, or with parity, and Windows does the rest. If a volume needs more space, toss some more drives in the pool.
To boot, it even offers autotiering, so frequently used data can live on an SSD while the rest stays on the HDDs. Deduplication is handled at the filesystem level [1].
No, this isn't a replacement for a SAN with RAID 6 and real-time deduplication, but it does get Windows at least in the same ballgame as Oracle with ZFS.
[1]: Not active deduplication. The data is initially stored duplicated, but a background task finds identical blocks and adds pointers. Of course, the made-from-scratch filesystem, ReFS (which, like ZFS, can check for bit rot on reads), doesn't have this, so one is still stuck with NTFS for that feature.
Re:BTRFS filesystem (Score:4, Informative)
The only way to truly prevent bitrot is by maintaining at least three complete copies of the data, and regularly comparing them.
There you go again. Acting like you know what you're talking about, but you don't.
ZFS and BTRFS have a much more efficient way to ensure correctness: CRC [wikipedia.org] of everything written. That is what is checked when you do a zpool scrub or a btrfs scrub. Random errors are very unlikely to produce the same checksum, so then you only need a second copy that doesn't produce CRC errors.
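For reference, the two scrub commands (pool name and mount point hypothetical):
zpool scrub tank                # ZFS: verify every block against its checksum
btrfs scrub start /mnt/photos   # BTRFS equivalent; check results with 'btrfs scrub status /mnt/photos'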
Hard drives are nowhere near as reliable [acm.org] as their manufacturers claim. Modern drives don't store the bits that you feed them exactly as you give them. Instead, they use CRC and error correcting codes, [arstechnica.com] so they only need most of the data to be correct. Usually, if the data doesn't match the CRC, and it cannot be corrected by ECC, then you get a read error instead of corrupted data. Which, I guess, is better than getting a corrupted picture. Ideally, a RAID would be able to recreate the missing block, but I can't find any reference to a RAID doing that.
But I've seen enough errors that I suspect something else is going on. It surely doesn't help that modern computers have many gigabytes of memory, but almost none have ECC on that memory. Your computer can be corrupting your data, and you have no warning that it's happening. In addition, hard drives lie. [acm.org] I'm not optimistic about the long-term storage of electronic data.
Re: (Score:3)
There you go again. Acting like you know what you're talking about, but you don't. ZFS and BTRFS have ...
Exactly dick to do with what I said. The filesystem doesn't matter. The operating system doesn't even matter.
Modern drives don't store the bits that you feed them exactly as you give them. Instead, they use CRC and error correcting codes, so they
... Which again counts for exactly dick. I'm talking about infrastructure and architecture, while you're blubbering on about the hardware.
Which, I guess, is better than getting a corrupted picture. Ideally, a RAID would be able to recreate the missing block, but I can't find any reference to a RAID doing that.
That's because you have no experience as a network administrator in a professional environment. Because then you'd know that's the very thing RAID was designed to do: Recover from hardware failure, which includes sectors becoming unreadable. You are clearly confuse
Re: (Score:3, Informative)
RAID10 and similar systems are two RAID5 systems which are independent and regularly compare data; These can detect which system is inconsistent, so you will always have at least one copy of your data in a consistent state.
You were doing quite well up until you said that sentence...
Checksums? (Score:2)
Re: (Score:3)
Once a week, I use openssl to calculate a checksum for each file and write that checksum, along with the path/filename, to a file. The next week, I do the same thing, and I compare (diff) the prior checksum file with the current checksum file.
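A sketch of that weekly job (archive path hypothetical; openssl's -r flag emits md5sum-style "hash filename" lines):
find /archive -type f -exec openssl dgst -md5 -r {} + | sort -k 2 > sums.new
diff sums.old sums.new   # any changed hash is a modified or rotted file
mv sums.new sums.old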
With about a terabyte of data, I've not seen any bitrot yet.
Long term, I plan to move to ZFS, as the server's disk capacity will be rising significantly.
Re: (Score:2)
You are assuming you started with good files.
No assumption on my part. I did start with good files. :)
In the submitter's case, he started with some good files, some unknown number of bad files, etc.
That's not how I read the comment. From the OP:
With the quantity of data (~2 TB at present), it's not really practical for us to examine every one of these periodically so we can manually restore them from a different copy.
That sounds to me as if he wants to check the files from time to time and locate the ones that have gone bad.
Re: (Score:2, Interesting)
Periodically checking them is the important part that no one seems to want to do.
A few years back we had a massive system failure and once we recovered the underlying problems and began recovery we found that most of the server image backup tapes for 6 months+ could not be loaded. The ops guys took a severe beating for it.
You think this stuff will never happen but it always does. We had triple redundancy with our own power backups but even that wasn't on a regular test cycle. Some maintenance guy left the s
Re: (Score:2)
weekly zfs scrub does the checks for you.
Re:Checksums? (Score:5, Informative)
I never archive any significant amount of data without first running this script at the top:
find . -type f -not -name md5sum.txt -print0 | xargs -0 md5sum >> md5sum.txt
It's always good to run md5sum --check right after copying or burning the data. In the past, at least a couple of percent of all the DVDs that I've burned had some kind of immediate data error
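Run from the top of the archive, that later check is a one-liner:
md5sum --quiet --check md5sum.txt   # prints only the files whose checksums no longer match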
(A while back, I rescanned a couple of hundred old DVDs that I burned ranging up to 10 years old, and I didn't find a single additional data error. I think that a lot of cases where people report that DVDs deteriorate over time, they never had good data on them in the first place and only discover it later.)
Re: (Score:2)
I don't have a large amount of critical data to backup (mostly documents for research). I've been using PAR (or rather relying on it) to verify and correct errors when recovering data.
That said, I realize I should probably also have a checksum. Should one consider a different algorithm than MD5, for example to prevent collisions of the hashes?
Re: (Score:2)
While MD5 isn't really secure against intentional attacks any more, the probability of a random collision is still negligible.
I originally started using MD5 for this purpose because in a test I did many years ago on some machine, md5sum actually ran faster than cksum. The shorter cksum output also does have a chance of generating hash collisions on reasonably sized data sets, although that probably doesn't matter too much for just disk error checking. I don't use the newer algorithms because they're overkill
ZFS (Score:5, Interesting)
ZFS without RAID will still detect corrupt files, and more importantly tell you exactly which files are corrupt. So a distributed group of ZFS drives could be used to rebuild a complete backup by copying only uncorrupted files from each.
You still need redundancy, but you can get away without the RAID in each case.
Par2 and Reed-Solomon (Score:2)
Bitrot does happen.
When a disk has a bad block and detects that, it will try to read the data from it and put it on a block from the reserve-pool. However, the data might be bad and corrupt, so you lose data.
Disks do have Reed-Solomon error correction built in (the same principle as par files), so they can repair some damage, but it doesn't always succeed.
Anyway, what I do for important things, is have par2 blocks that go along with the data. All my photo-archives have par2 files attached to them.
I reckon you could even automate it. To have a s
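A hypothetical sketch of that automation with par2cmdline, one recovery set per album directory:
for d in ~/photos/*/; do
    ( cd "$d" && par2 create -r10 album.par2 ./*.jpg )   # 10% recovery data per album
done
Verification later is "par2 verify album.par2" in each directory, and "par2 repair" if it complains.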
A paranoid setup (Score:5, Interesting)
If you really want hassle free and safe, it would be expensive, but this is what I would do:
ZFS for the main storage - either using double parity via ZFS or RAID 6 via hardware RAID.
Second location - Same setup, but maybe with a little more space
Use rsync between them using the --backup switch so that any changes get put into a different folder.
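That rsync leg might look like this (host and paths hypothetical):
rsync -a --delete --backup --backup-dir=/tank/changed/$(date +%F) /tank/photos/ backuphost:/tank/photos/
Anything modified or deleted since the last run gets moved, datestamped, into /tank/changed on the receiver for review, instead of being silently overwritten.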
What you get:
Pretty disaster tolerant
Easy to maintain/manage
A clear list of any files that may have been changed for *any* reason (Cryptolocker anyone?)
Upgradable - just change drives
Expense - You can build it for about $1800 per machine or $3600 total if you go full-on hardware raid. That would give you about 4TB storage after parity (4 2TB drives - $800, Raid Card - $500, basic server with room in the case - $500)
What you don't get: Lost baby pictures/videos. I've been there, and I'd pay a lot more than this to get them back at this point, and my wife would pay a lot more than I would..
Your current setup is going to be time consuming, and you're going to lose things here and there anyway.. If you just try to do the same thing but make it a little better, you're still going to have the same situation, just not as bad. In this setup you have to have like 5 catastrophic failures to lose anything, sometimes even more..
Re: (Score:2)
Expense - You can build it for about $1800 per machine or $3600 total if you go full-on hardware raid. That would give you about 4TB storage after parity (4 2TB drives - $800, Raid Card - $500, basic server with room in the case - $500)
Either use a RAID controller or use ZFS. It's not a good idea to use both at the same time.
Re: (Score:2)
I've used them together. Seems to work just fine.. Just don't let ZFS know that there's more than 1 drive. You can't have them both trying to manage the redundant storage.
ZFS has some great features besides its redundant storage. You can get them from other filesystems too though I suppose, but I like snapshots built into the filesystem. It *is* overkill to have the filesystem doing checksums and the raid card detecting errors as well, but that's why this is the paranoia setup... Not really looking for t
Re: (Score:3)
> Just don't let ZFS know that there's more than 1 drive.
That is *precisely* the wrong thing to do. As in, the exact opposite of how you should do it.
Instead, configure the RAID card to be JBOD and let ZFS handle the multiple-drive redundancy (raidz and/or mirroring), as well as the error detection and correction.
Otherwise, there is little or no benefit in using ZFS. ZFS can't correct many problems if it doesn't have direct control over the individual disks, and RAID simply can't do the things that ZF
Re: (Score:2)
Never use a RAID controller, period. ZFS builtin RAIDZ is far superior in every way.
Re: (Score:2)
Use rsync between them using the --backup switch so that any changes get put into a different folder. ...
A clear list of any files that may have been changed for *any* reason (Cryptolocker anyone?)
+1 Clever.
Re:A paranoid setup (Score:4, Informative)
good post, except for three details:
1. if you're using ZFS on both systems, you're *much* better off using 'zfs send' and 'zfs recv' than rsync.
do the initial full copy, and from then on you can just send the incremental snapshot differences.
one advantage of zfs send over rsync is that rsync has to check each file for changes (either file timestamp or block checksum or both) every time you rsync a filesystem or directory tree. With an incremental 'zfs send', it only sends the difference between the last snapshot sent and the current snapshot.
you've also got the full zfs snapshot history on the remote copy as well as on the local copy.
(and, like rsync, you can still run the copy over ssh so that the transfer is encrypted over the network)
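roughly, with hypothetical pool and snapshot names:
zfs snapshot tank/photos@week07
zfs send -i tank/photos@week06 tank/photos@week07 | ssh backuphost zfs recv tank/photos
only the blocks that changed between the two snapshots cross the wire.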
2. your price estimates seem very expensive. with just a little smart shopping, it wouldn't be hard to do what you're suggesting for less than half your estimate.
3. if you've got a choice between hardware raid and ZFS then choose ZFS. Even if you've already spent the money on an expensive hardware raid controller, just use it as JBOD and let ZFS handle the raid function.
WinRAR... (Score:2)
WinRAR isn't perfect, but it works on a number of platforms, be it OS X, Windows, Linux, or BSD. This provides not just CRC checking, but one can add recovery records for being able to repair damage. If storing data on a number of volumes (like optical media), one can make recovery volumes as well, so only four CDs out of a five-CD set are needed to get everything back.
It isn't as easy as ZFS, but it does work fairly well for long term archiving, and one can tell if the archive has been damaged years to d
BTRFS or ZFS (Score:2)
BTRFS and ZFS both do checksumming and can detect bit-rot. If you create a RAID array with them (using their native RAID capabilities) they can automatically correct it too. Using rsync and unison I once found a file with a nice track of modified bytes in it -- spinning rust makes a great cosmic ray or nuclear recoil detector. Or maybe the cosmic ray hit the RAM and it got written to disk. So, use ECC RAM.
But "bit-rot" occurs far less frequently than this: I find is that on a semi-regular basis my ent
Re: (Score:2)
Just say no to BTRFS. Use ZFS with RAIDZ.
RAID + redundancy (Score:2)
There's really no way around it. Storage media is not permanent. You can store your important stuff on RAID but keep the array backed-up often. RAID is there to keep a disk*N failure from borking your production storage and that's it. If you can afford cloud storage, encrypt your array contents (encfs is good) and mirror the contents with rsnapshot [rsnapshot.org] or rsync [samba.org] to amazon, dropbox, a friends raid array, whatever. SATA drives are cheap enough to keep a couple sitting around to just plug in and mirror to every w
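The encfs piece, sketched with hypothetical paths: the first directory holds ciphertext, the second is the cleartext view, and only ciphertext ever leaves the machine.
encfs ~/backup.crypt ~/backup             # files written into ~/backup land encrypted in ~/backup.crypt
rsync -a ~/backup.crypt/ remote:backup/   # mirror the encrypted side only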
Just get a carbonite account (Score:2)
I have been going through this issue myself. In a single weekend of photo and video taking, I can easily fill up a 16 gig memory card, sometimes a 32 gig. About 10 years ago I lost about two years' worth of pictures due to bitrot (i.e. my primary failed, and the backup DVD-Rs were unreadable after only a year - I was able to recover only a handful of photos using disc-recovery software). Since then, I've kept at least three backups and reburned discs every couple of years. But if I can fill up two BD-Rs in a w
Re: (Score:3)
how can you be sure that your cloud provider is not suffering from bitrot on your stored files?
http://en.wikipedia.org/wiki/Carbonite_(online_backup)#Product_details [wikipedia.org]
Works for me - better than what I have going on at home, and cheaper than I could set up something like this. And anyways, I still have my external HDD backups as well. It's just another level of backup to keep me from data loss.
Bacula (Score:2)
Have mercy! (Score:5, Funny)
We have hundreds of thousands of family pictures and videos we're trying to save using this advice. But in some sparse searching of our archives, we're seeing bitrot destroying our memories. With the quantity of data (~2 TB at present),
As the proud owner of dozens of family photo albums, a stack of PhotoCDs etc which rarely see the light of day, the bigger challenge is whether anyone will ever voluntarily look at those terabytes of photos. Having been the victim of excruciating vacation slide shows that only consisted of 40-50 images on a number of occasions (not to mention the more modern version involving a phone/tablet waving in my face), I can only imagine the pain you could inflict on someone with the arsenal you are amassing.
The old-fashioned method (Score:5, Interesting)
Don't forget the old-fashioned method: make archival prints of your photos and spread copies among your relatives. Although that isn't practical for "hundreds of thousands", it is practical for the hundreds of photos you or your descendants might really care about. The advantage of this method is that it is a simple technology that will make your photos accessible into the far future. And it has a proven track record.
Every other solution I've seen described here better addresses your specific question, but doesn't really address your basic problem. In fact, the more specific and exotic the technology (file systems, services, RAID, etc.) the less likely your data is to be accessible in the far future. At best, those sorts of solutions provide you a migration path to the next storage technology. One can imagine that such a large amount of data would need to be transported across systems and technologies multiple times to last even a few decades. But will someone care enough to do that when you're gone? Compare that to the humble black-and-white paper print, which if created and stored properly can last for well over a hundred years with no maintenance whatsoever.
Culling down to a few hundred photos may seem like a sacrifice, but those who receive your pictures in the future will thank you for it. In my experience, just a few photos of an ancestor, each taken at a different age or at a different stage of life, is all I really want anyway. It's also important to carefully label them on the back, where the information can't get lost, because a photo without context information is nearly meaningless. Names are especially important: a photo of an unknown person is of virtually no interest.
Sorry I don't have a low-tech answer for video, but video (or "home movies", as we used to call it) will be far less important to your descendants anyway.
Re:The old-fashioned method (Score:4, Interesting)
So what good is a bunch of pics or videos of long-past events except to the person involved? Digital images today, unless meticulously managed and edited, do little good for historical purposes like the photo album of yesterday. Especially if they're locked away in some online archive that may or may not remain easily accessible, assuming the owner keeps up with format and company changes over the decades and the descendants even know where they are.
Prepare for maintainer-rot, too (Score:4, Interesting)
A family archive maintained by the "tech guy/gal" in the family is also subject to failure from the death or disability of the aforementioned maintainer. Any storage/backup solution should therefore be sufficiently documented (probably on paper, too) that the grieving loved ones can get things back after a year or two of zero maintenance and care of the system. That would also imply eschewing home-brew type systems in favor of using standard tools, so a knowledgeable tech person not familiar with the creator's original design can salvage things in this tragic but possible scenario. Document the system so even if the family can't do it themselves, and an IT guy has to be contracted to resurrect the data, he'll have the information needed to do so.
Any system sufficiently dependent on regular maintenance by just one particular person is indistinguishable from a dead-man time-bomb.
You need an editing plan more than a backup plan (Score:5, Interesting)
Photos = Lightroom plus DNG on a Drobo (Score:3)
ZFS, of course (Score:3)
ZFS (especially when your dataset-size increases and you add more RAM) is picky about that, too.
Bit-rot does not only occur in hard-disks or flash.
You should really, really take a hard look at every set of photos and select one or two from each "set", then have these printed (black and white, for extra longevity).
If this results in still too many images, only print a selection of the selection and let the rest die.
Back up more frequently and to more places (Score:2)
The solution to Bitrot and reading of old media is very simple and honestly I don't know why it comes up so much. Storage is DIRT CHEAP. 2TB of Data is NOTHING, you can get a 3TB+ external drive for $100 or even less on sale. Buy 3 drives, keep 1 in SAFELOCATION*, Back up to 1 drive every even week, and the second one every odd week, and once a month swap the one in the SAFELOCATION out for a local one and repeat the cycle. Increase or decrease frequency of SAFELOCATION swapping depending on level of paran
Checksumming + sufficient redundancy (Score:2)
We wrote our own parallel filesystem to handle just that. It stores a checksum of the file in the metadata. We can (optionally) verify the checksum when a file is read, or run a weekly "scrubber" to detect errors.
We also have Reed-Solomon 6+3 redundancy, so fixing bitrot is usually pretty easy.
Errors While Copying (Score:2)
And given that this seems to be a common problem, why in the holiest of hells does the cp command not have a verify option? Yeah, it's easy enough to
ZFS is one option, Glacier is worth looking at. (Score:2)
I've used ZFS under Linux for 5 years now for exactly this sort of thing. I picked ZFS because I was putting photos and other things on it for storage that I wasn't likely to be looking at actively and wouldn't be able to detect bit-rot in until it was far too late. ZFS has detected and corrected numerous device corruption and unreadable-sector issues over the years, via monthly "zpool scrub" operations.
I have been backing these files up to another ZFS system off-site. But now I'm starting to loo
MD5 and a few scripts (Score:3)
Here's a cheap easy solution (assuming you can write some basic scripts)
1. Start by taking an MD5 of all your pics. Save the results.
2. Backup everything to a 2nd drive. Take MD5s and be sure they match using basic scripts.
3. Periodically scan drives 1 and 2 and compare against their expected MD5 values. If one has changed, copy it from the other (assuming it is still correct)
You could expand this with more drives if you are extra paranoid. You could do this cheap, check regularly, and know when bitrot is happening.
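A sketch of step 3, with hypothetical mount points and checksum list (verify drive 2 the same way before trusting it as the source):
cd /mnt/drive1
md5sum --quiet --check ~/pics.md5 2>/dev/null | sed 's/: FAILED.*//' > /tmp/bad
while IFS= read -r f; do
    cp -p "/mnt/drive2/$f" "$f"   # pull the good copy back over the rotted one
done < /tmp/bad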
surprising recovery (Score:2)
I think that when writable CDs first came out, we thought that they would last forever. And in some sense they do last long enough. The other day I found a CD binder full of games and a few backups from 1996. The most surprising of all was a collection of photos that I thought had been long lost, and with a little rsync running over and over and over, I got all the files off intact and saved them to my Flickr account.
The most important thing to understand, I think, is that we have to look at digital storage
Re: (Score:3)
A much better solution would be archival-quality Blu-rays. They can hold 25 GB apiece and they're supposed to last 100 years, but they really just need to last long enough until a new, even denser storage medium comes along.
Re:Excellent question (Score:5, Informative)
Not all cloud storage is expensive. It's only $4 a month for unlimited backups to CrashPlan.
They also do checksums and versioning and can be set to never remove deleted files from the backup.
I have 12.8TB backed up to them and it's been working great.
Other than that, ZFS can't be beat. I use that as well.
Re: (Score:2)
I'm curious how that is doable. Even Amazon Glacier would be about $10.24 per terabyte stored per month, so I'd be looking at about $130/month for that much info.
I am not passing judgement... just have not heard much about CrashPlan, good/bad other than a quick search on it.
Re: (Score:2)
Users that utilize large amounts of storage are relatively uncommon and are subsidized, in part, by users who utilize less storage. If everyone used terabytes of storage at $4/month, that wouldn't really be sustainable.
Although just a personal anecdote, I've used CrashPlan for ~4 years now (with 11 computers belonging to various family members all backing up to their service with a total of around 500GB being stored with them). Zero complaints. It's done everything I expected, always worked, and never had i
Re: (Score:2)
It's only $4 a month for unlimited backups to CrashPlan.
Do they throttle? I looked into the one that advertises unlimited backups for $60/yr and they rate limit the connection down as you increase your data. I estimated 9 years for the first backup to complete based on published rates.
"Unlimited" - IDTIMWYTIM.
Re: (Score:2)
"IDTIMWYTIM." should be worked out to be SOMEIDIOT
Re:Excellent question (Score:4, Interesting)
In reality, Dropbox, Skydrive, and other cloud services should be treated as a type of media, just like BD-ROMs, tape, SSD, HDD, and even hard copy.
The trick is to use different media to protect against different things. My Blu-Ray disks protect an archive against tampering or CryptoLocker (barring a hack that flashes the BD burner's ROM to allow the laser to overwrite written sectors.) However, they have to be maintained in a good environment with a good indexing system. My files stashed on Dropbox bring me accessibility virtually anywhere... but malware that erases files could wipe that volume out in no time.
Similar with external HDDs. Those are great for dealing with a complete bare metal restore, but provide little to no protection against malware. Tape, OTOH, is expensive for the drive and requires a fast computer, but once the read-only tab is flipped or the WORM session is closed, the data is there until the tape is physically destroyed.
Of course, there is not just media... there are backup programs. This is why I use the KISS principle when it comes to backups. I use an archiving utility to break up a large backup into segments (with recovery segments to allow the archive to be repaired should media go bad), then burn the segments onto optical media.
I've found that using a backup utility can work well... until one has to restore, the company is out of business, and one can't find the CD key or serial number so the software will install. One major program I used for years worked excellently... then just refused to support new optical drives (as in ignoring them completely.) So, unless I can find a DVD drive on its antiquated hardware list on eBay, all my backups are inaccessible. I was lucky enough to find that and copy the data to a HDD, but using the lowest common denominator is a good thing.
Backups are the often neglected underbelly of the IT world. While storage, security, availability and other technologies have advanced significantly, backups on the non-enterprise level are still languishing behind in almost every way possible. It was only a few years ago that encryption became standard with backup utilities [1].
[1]: With encryption comes key management, and some backup programs make that easy, some make it incredibly hard.
Re: (Score:3, Insightful)
Bitrot is a myth in modern times. Floppies and cheap-ass tape drives from the 90s had this problem, but anything reasonably modern (GMR) will read what you wrote until mechanical failure.
The key therefore is to verify as you write. Usually, verifying a sample of a few GB will let you know if everything went OK. Do your backups with checksums of some sort. A modern tape drive and backup software will do that automatically, and let you schedule a verify automatically as part of backups (2 TB? That's 1 ta
Re: (Score:2)
Bitrot is a myth in modern times.
You state this without any substantiation as if it were a fact.
Re: (Score:3)
Well, I did backup software and hardware for nearly 20 years. But I can't substantiate that with a link.
Re:Excellent question (Score:5, Interesting)
Bitrot is a myth in modern times. Floppies and cheap-ass tape drives from the 90s had this problem, but anything reasonably modern (GMR) will read what you wrote until mechanical failure.
This isn't just wrong, it's laughably wrong. ZFS has proven that a wide variety of chipset bugs, firmware bugs, actual mechanical failure, etc are still present and actively corrupting our data. It applies to HDDs and flash. Worse, this corruption in most cases appears randomly over time so your proposal to verify the written data immediately is useless.
Prior to the widespread deployment of this new generation of check-summing filesystems, I made the same faulty assumption you made: that data isn't subject to bit rot and will reproduce what was written.
ZFS or BTRFS will disabuse you of these notions very quickly. (Be sure to turn on idle scrubbing).
It also appears that the error rate is roughly constant but storage densities are increasing, so the bit errors per GB stored per month are increasing as well.
Microsoft needs to move ReFS down to consumer products ASAP. BTRFS needs to become the Linux default FS. Apple needs to get with the program already and adopt a modern filesystem.
Re: (Score:2)
You hit the nail on the head. Apple should either get with Oracle and put ZFS back in the OS X kernel as the default filesystem, or get with Microsoft and license ReFS. HFS+ was a good filesystem when OS X hit the market, but it has been over a decade, and everyone else has moved on.
One reason why the IT industry moved from RAID 5 to RAID 6 as a standard is that disk capacities are growing but I/O is not keeping pace. So, it takes longer and longer to rebuild a drive. RAID 6 is now a must b
Re: (Score:2)
And anyone who thinks electromagnetic tape is "dead" is naive or just ignorant. People have been predicting the death of tape for decades, and it's no more true today than it was in the 70's. Modern EM tape is typically rated for 15 to 30 years of retention, and as long as it is not over-exposed to moisture during storage, it has proven to be able to last that long: otherwise, the manufactu
Do not defrag? Definitely do not overclock. (Score:3)
ZFS has proven that a wide variety of chipset bugs, firmware bugs, actual mechanical failure, etc are still present and actively corrupting our data.
And I expect that defragging aggravates this. Read a perfectly good block of data from disk into flaky RAM, have a bit flip, and write out that corrupted data to its new location. Even if the software verifies the write, it's likely verifying against RAM, and it did successfully write what was in RAM.
And then there is overclocking. If a computer is just used for gaming, no problem. But if it's used for more serious things, or for archiving things of value to you, then you may want to pass on overclocking. Folks who say
Re: (Score:2)
I've found damaged SAS cables, JBOD enclosures with dodgy bridges, etc. because of ZFS.
With that all sai
Re: (Score:2, Insightful)
it doesn't seem that way... http://forums.freenas.org/threads/ecc-vs-non-ecc-ram-and-zfs.15449/
Re: (Score:3, Interesting)
I've been surprised by the lack of reference to properly error-checked data paths so far in these comments. I'm continually saddened by the ever-increasing aggressiveness in clock speeds and density of RAM in consumer-level systems while manufacturers stubbornly refuse to implement ECC. Many people are even hostile to the idea, as if ECC RAM is somehow tainted.
This article points out something else I'd not even considered. A scenario where lack of ECC on a self healing file system can amplify a RAM failure to a catastrophic degree
Re: (Score:2)
You make a great point about CD-Rs, I guess I should have broadened my statement to "cheap-ass backup solutions from the 90s", not just floppies and tape.
Re: (Score:2)
Oh, really? Is that why drive manufacturers specify a non-recoverable read error rate - typically on the order of 1 bit per 100 terabits [wdc.com]? Let's see now. A single 4TB drive contains 32 terabits of data. So if you have three of them, either in a RAID or separately, and you try to read the entire contents, you can expect an av
Re: (Score:2)
The error rate from other sources (e.g. on the network copy) is far higher. If your backups are corrupt, it's almost certain they were corrupt day 1.
Test your backups after you make them: it's a cheap and easy 99% solution.
Re: (Score:2)
"We're experiencing data going bad and not being restorable from back-ups because it just CORRUPTS itself for no visible reason" "That's a myth and doesn't actually happen."
HIV was created by racist bigots to slander blacks and homosexuals.
Re: (Score:2)
I've investigated hundreds of cases of "bit rot" over the years in my job, and other than very weak magnetic media (or CD-Rs as someone upthread pointed out), corrupt backups were always corrupt when written. Had the poor SOB only verified his backups day 1, he'd not be in a world of shit. Every single time.
Re: (Score:3)
Cloud and complete security together is an oxymoron.
Re: (Score:2)
It depends on your storage needs. For things that you need to regularly access, Amazon S3 will cost you about $175/month for 2TB storage plus transfer fees, but is readily accessible at any time.
Amazon Glacier would only cost you $20/month for that amount of storage, but has various limitations on retrieval time (~4 hour minimum) and higher costs if you need to retrieve more data in a shorter amount of time. As the name suggests, it's designed for "cold storage".
Both offer extremely high degrees of reliabil
Re: (Score:2)
So if someone doesn't have your level of expertise on a single isolated topic you automatically dismiss this person as unworthy of your company?
This is why people don't like you.
Re: (Score:2)
What is the most practical way to maintain bitwise accuracy on a diverse set of binary data in an automated way using "diff and md5sum"?
Note that part where he was looking for an automated solution that will run itself without intervention, or a better means than hard drives...
You suggested... "Do some manual stuff using hard drives".
Right.
Re: (Score:2)
"Thanks for immediately jumping down my throat, though ;)"
Yeah. 'Cause you're the victim. WTF? Someone calls you out for being dickish, and they're jumping down your throat?
Re: (Score:2, Informative)
Warning for all UNIX newbies: that command will reset the file to 0 bytes. Just so you know.
(I've seen some cases when a rookie is setting up a Linux system and people jokingly throw him these "rm -rf /" commands and the poor guy actually ends up wrecking his system.)
Re: (Score:2)
Warning for all UNIX newbies: that command will reset the file to 0 bytes. Just so you know.
(I've seen some cases when a rookie is setting up a Linux system and people jokingly throw him these "rm -rf /" commands and the poor guy actually ends up wrecking his system.)
I think the general consensus is that if you're stupid enough to run a command you got from SomeRandomInternetAsshole420 without verifying what it will do first, you deserve to have your system wiped.
Re: (Score:2)
And yet, one of FLOSS's selling points is our great community support...
Every community with a notable population size is going to have its share of bad actors.
Besides, ever since you were a kid you've been taught to not trust strangers based on their word alone.
Re: (Score:2)
WARNING: DO NOT RUN ANY COMMAND IN THE PARENT, THIS COMMENT OR ANY OF THE SIBLING COMMENTS.
Unless you are working on the NSA's main database. Then you should run these commands several times, just to be sure the backup is complete. Then take a sledge hammer to the original files, for security. And restore from the backup, to guarantee the backup worked.
Book a flight to Moscow first though
That's what some RAID levels _could_ be for (Score:2)
A two-disk RAID1, or a RAID5, theoretically ought to be able to detect when there's corruption, but shouldn't be able to correct it. If you've got two different data values, you don't know which one is right.
But it occurs to me: RAID6 (or three-or-more disk RAID1) really ought to be able to correct. Imagine a three-disk RAID1: if two disks say a byte is 03 and one disk says 02, then 03 is probably right. RAID6, similarly, has enough information to be able to do the kinds of repairs that you could do with
Re: (Score:2)
I just wish LTO drives were cheaper. Otherwise, they would be ideal for backups because they support encryption on the drives themselves. All LTO-4 tapes and newer support this, so any LTO-4 drive given the right key can decrypt another drive's tape.
Of course, WORM media is always nice, especially with malware being a constant threat.
Re: (Score:2)
You really gotta be careful with that attitude. The photos seem worthless at the time you take them, and most of them remain worthless forever. Most of them. Then you see that old picture of when your now-grown-up dog used to be a cute little puppy, and awww!!!
Re: (Score:2)
Jesus Christ, take it easy, man. I was making a harmless joke that anyone who was ever forced to watch boring holiday slideshows would be able to understand. Now I'm being accused of mental health issues, not being able to procreate and whatever else.
If hundreds of thousands of family pictures doesn't seem a bit excessive to you, so be it. After all, it takes only a few weeks to sort through them. But please calm down a little and stop spamming AC troll posts.