Data Storage

Error-Proofing Data With Reed-Solomon Codes

ttsiod recommends a blog entry in which he details steps to apply Reed-Solomon codes to harden data against errors in storage media. Quoting: "The way storage quality has been nose-diving in the last years, you'll inevitably end up losing data because of bad sectors. Backing up, using RAID and version control repositories are some of the methods used to cope; here's another that can help prevent data loss in the face of bad sectors: Hardening your files with Reed-Solomon codes. It is a software-only method, and it has saved me from a lot of grief..."
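For concreteness, here is a minimal sketch of the hardening idea described in the summary. It is not the blog's own ".shielded" tool; it assumes the third-party Python reedsolo package (pip install reedsolo) and simply shows Reed-Solomon parity being added to some bytes and later used to repair them.

```python
from reedsolo import RSCodec

rsc = RSCodec(16)                 # 16 parity bytes per 255-byte codeword: repairs up to 8 unknown bad bytes each

data = b"the quick brown fox jumps over the lazy dog " * 50
shielded = rsc.encode(data)       # original bytes plus parity (reedsolo chunks long inputs internally)

# Simulate a little bit rot on the stored copy...
damaged = bytearray(shielded)
damaged[3] ^= 0x5A
damaged[700] ^= 0xFF

# ...and repair it. Recent reedsolo versions return a tuple whose first element
# is the decoded message; older versions return the message directly.
out = rsc.decode(damaged)
message = out[0] if isinstance(out, tuple) else out
assert bytes(message) == data
```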
  • slow news day anyone?

    • by Nefarious Wheel ( 628136 ) on Monday August 04, 2008 @12:48AM (#24462493) Journal
      Arrrh, aye, this be done since the dawn of time, matey! Ever since the days before global warming when pirates kept a second pistol in their belt just in case. Cap'n Jack Reed in the Solomons would harden his data with a second powder charge when the occasion demanded it.

      "Awk! Parroty Error! Parroty Error! Pieces of Seven, Pieces of Seven"

      (*BOOM*) never did like that bird.

    • Indeed. TFA should get back to me when it discovers LDPC.
      • by xquark ( 649804 )

        LDPC-based codes only work for pure erasure channels and do NOT work for static error channels. How does one perform loopy belief propagation when the error probability distribution of the medium (in this case the disk) cannot be modeled correctly?

    • Re: (Score:2, Funny)

      Bah, I never make any erorrs
  • salkdffalkfhwefh2ihr5j45!"Â5jkcq2%"45wceh5 234j5cja4h5c2q4x524qZTkzzj3kzg3qkgl3kzgq3kjgh kq3gkzlq3hwgjlh 34qlgch34ljkw93q0x45c45 #&%#%&5vcXÂ%YXCHGC%ub64bVE5&UBy4vy5yc5E&Â E%vu64EV46rcuw4&C/4w6
  • by Rene S. Hollan ( 1943 ) on Sunday August 03, 2008 @05:52PM (#24459631)

    ... at least CDROMs employ RS codes.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Yeah, but is there another problem elsewhere in the system? I have an el-cheapo USB-PATA adapter with an MTBF (mean time to bit flip) of about 2 hours. Every other disk was ruined, and I only knew because of a sick little obsession with par2. Disk data is ECC'd, PATA data is parity-checked, and USB data is checksummed. Still, inside one little translator chip, all that can be ruined. And that's why data integrity MUST be an operating/file system service.

    • by xquark ( 649804 ) on Monday August 04, 2008 @01:15AM (#24462645) Homepage

      My understanding is that it is possible to drill a few holes, no larger than 2mm in diameter, equally spread over the surface of an audio CD, and with the help of h/w RS erasure decoding, channel interleaving and channel prediction (e.g. probabilistically reconstructing the missing right channel from the known left channel), one can produce a near-perfect reconstruction. That's what usually happens to overcome scratches and other kinds of simple surface defects.

    • But not every drive/OS seems to really use it. Some anecdotal evidence:

      Back in the Windows 9x days, I helped a friend of mine reinstall Windows (98 IIRC). After copying files to the HD and restarting, the newly installed system always crashed. There was no error message indicating a problem with reading the data from the CD-ROM drive (a LiteOn which was known to be a bit dodgy).

      We finally tried another drive and it worked on the first try. I conclude that there were unreported errors reading from

    • .. at least CDROMs employ RS codes.

      RS codes are good at correcting randomly scattered bit errors. The error mode in CDs is missing chunks (e.g. scratches). So, they use a mechanism which scatters bits around. When a scratch (correlated errors) is de-scattered, it becomes randomly scattered errors, so the RS codes can do their job.

  • ZFS? (Score:3, Interesting)

    by segfaultcoredump ( 226031 ) on Sunday August 03, 2008 @05:53PM (#24459653)

    Uh, is this not one of the main features of the ZFS file system? It does a checksum on every block written and will reconstruct the data if an error is found (assuming you are using either raid-z or mirroring; otherwise it will just tell you that you had an error).

    • Re:ZFS? (Score:5, Informative)

      by xquark ( 649804 ) on Sunday August 03, 2008 @05:59PM (#24459719) Homepage

      Checksums really only help in detecting errors. Once you've found errors, if you have an exact redundant copy somewhere else you can repair them. What Reed-Solomon codes do is provide the error-detecting ability but also the error-correcting ability, whilst at the same time reducing the amount of redundancy required to a near-theoretical minimum.

      BTW, checksums have limits on how many errors they can detect within, let's say, a file or other kind of block of data. A simple rule of thumb (though not exact) is that 16- and 32-bit checksums can detect up to 16 and 32 bit errors respectively; any more and the chance of not detecting every bit error goes up, and it could even result in not finding any errors at all.
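      To make the detection-limit point concrete, here's a tiny demo. It uses a plain 16-bit additive checksum (my simplification; a CRC does better but still has limits), which completely misses a transposition error:

```python
# A simple 16-bit additive checksum cannot see some multi-byte errors at all.
def checksum16(data: bytes) -> int:
    return sum(data) & 0xFFFF

good = b"reed-solomon"
bad = b"reed-solomno"          # last two bytes transposed: clearly corrupted...
assert good != bad
assert checksum16(good) == checksum16(bad)   # ...yet the checksum is identical
```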

      • by atrus ( 73476 )
        ZFS does maintain ECC codes to aid in correction. Even on single disks.
      • ZFS checksums are actually hashes, as in "cryptographic hash", so they're pretty damn reliable (though not theoretically 100% reliable) at detecting errors.

      • Re:ZFS? (Score:4, Informative)

        by this great guy ( 922511 ) on Monday August 04, 2008 @03:56AM (#24463433)

        I have been a ZFS user for a while and know a lot of its internals. Let me comment on what you said.

        checksums really only help in detecting errors.

        Not in ZFS. When the checksum reveals silent data corruption, ZFS attempts to self-heal by rewriting the sector with a known good copy. Self-healing is possible if you are using mirroring, raidz (single parity), raidz2 (dual parity), or even a single disk (provided the copies=2 filesystem attribute is set). The self-healing algorithm in the raidz and raidz2 cases is actually interesting as it is based on combinatorial reconstruction: ZFS makes a series of guesses as to which drive(s) returned bad data, reconstructs the data block from the other drives, and then validates the guess by verifying the checksum.
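        A toy sketch of that guess-and-verify loop (my own illustration, nothing to do with real ZFS code), using a single XOR parity column and a SHA-256 block checksum as the arbiter:

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def checksum(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()

def self_heal(chunks, parity, expected):
    """chunks: one piece of a block per data drive; one of them may be silently bad."""
    if checksum(b"".join(chunks)) == expected:
        return b"".join(chunks)                    # nothing wrong
    for bad in range(len(chunks)):                 # guess which drive lied...
        rebuilt = reduce(xor, chunks[:bad] + chunks[bad + 1:], parity)
        candidate = b"".join(chunks[:bad] + [rebuilt] + chunks[bad + 1:])
        if checksum(candidate) == expected:        # ...and let the checksum confirm the guess
            return candidate
    raise IOError("more damage than a single parity can cover")

chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, chunks)
expected = checksum(b"".join(chunks))
chunks[1] = b"B0RK"                                # silent corruption on "drive" 1
assert self_heal(chunks, parity, expected) == b"AAAABBBBCCCC"
```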

        checksums have limits on how many errors they can detect.

        All the ZFS checksumming algorithms (fletcher2, fletcher4, SHA-256) generate 256-bit checksums. The default is fletcher2, which offers very good error detection (even for errors affecting more than 256 bits of data) assuming unintentional data corruption (the Fletcher family are not cryptographic hash algorithms; it is actually possible to intentionally find collisions). SHA-256 is collision-resistant, so in practice it will detect all data corruption: it would be computationally infeasible to come up with a corrupted data block that still matches the SHA-256 checksum.

        A good intro to the ZFS capabilities is these slides [opensolaris.org]

        • by notany ( 528696 )

          The blocks of a ZFS storage pool form a Merkle tree in which each block validates all of its children. Merkle trees have been proven to provide cryptographically-strong authentication for any component of the tree, and for the tree as a whole. ZFS employs 256-bit checksums for every block, and offers checksum functions ranging from the simple-and-fast fletcher2 (the default) to the slower-but-secure SHA-256. When using a cryptographic hash like SHA-256, the uberblock checksum provides a constantly up-to-dat
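          For readers who haven't met Merkle trees, here is a rough, illustrative sketch of the general idea (this is not ZFS's actual on-disk layout): every interior node stores the hash of its children, so one root hash covers the whole tree and any silent change below it shows up at the top.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks) -> bytes:
    level = [h(b) for b in blocks]          # leaf checksums
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]                          # the root ("uberblock"-style) checksum

blocks = [b"block-0", b"block-1", b"block-2"]
root = merkle_root(blocks)
blocks[1] = b"bl0ck-1"                       # silent corruption anywhere...
assert merkle_root(blocks) != root           # ...changes the root hash
```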

      • checksums really only help in detecting errors. Once you've found errors, if you have an exact redundancy somewhere else you can repair the errors. What reed-solomon codes do is provide the error detecting ability but also the error correcting ability whilst at the same time reducing the amount of redundancy required to a near theoretical minimum.

        Having both, however, can be useful. For example, you can arrange your data in a rectangle and use standard error-detecting checksums along the rows and RS on the columns. Knowing which particular rows have an error can effectively double the error-correction rate of Reed-Solomon. See Erasure [wikipedia.org]
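        A toy version of that rectangle idea (rows checked with CRC32; a single XOR parity row stands in for the Reed-Solomon column code, which in a real scheme could rebuild several bad rows): the row checksum tells you *where* the damage is, turning an error into an erasure that the column redundancy can fill in.

```python
import zlib

rows = [b"the quick brown ", b"fox jumps over t", b"he lazy dog.    "]
row_crcs = [zlib.crc32(r) for r in rows]
parity = bytes(a ^ b ^ c for a, b, c in zip(*rows))    # column-wise XOR parity row

# Simulate a bad sector wiping out row 1:
rows[1] = b"\x00" * 16

bad = [i for i, r in enumerate(rows) if zlib.crc32(r) != row_crcs[i]]
assert bad == [1]                                      # the row checksums locate the damage
good = [r for i, r in enumerate(rows) if i != bad[0]]
rows[bad[0]] = bytes(a ^ b ^ p for a, b, p in zip(*good, parity))
assert zlib.crc32(rows[1]) == row_crcs[1]              # the erased row is rebuilt
```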

  • I've been burned by scratched DVD+Rs too many times. I'd be interested if there were a way to do this kind of thing in Windows..

    • by enoz ( 1181117 )

      The WinRAR archiver has an optional recovery record which protects against bad blocks.

      When you create an archive just specify the amount of protection you require (in practice 3% has served me well).

      • by xquark ( 649804 )

        I believe the underlying code used in RAR for the recovery record is in fact an RS(255,249) code.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      The cross platform program dvdisaster [dvdisaster.net] will add extra information to your DVD as an error correcting code. Alternatively, you can make a parity file for an already-existing DVD and save it somewhere else.

      It actually has a GUI too, so it must be user friendly.

    • by Firehed ( 942385 )

      Bit-torrent?

      No seriously. I don't know a whole lot about network infrastructure (nor do I care, strictly speaking), but there's clearly some sort of error-checking/correcting going on behind the scenes as I'll grab huge disk images that pass verification before they get mounted (ex. iPhone SDK ~ 1.2GB) all the time. Some sort of network-based solution is really ideal for data transfer.

      Of course with residential upload speeds it's often slower than the ol' sneakernet (depends where it's going, how it's get

  • by inKubus ( 199753 ) on Monday August 04, 2008 @12:42AM (#24462461) Homepage Journal

    When he said "harden files", I thought he was going into a long soliloquy on all the porn on his computer, so I went to the next story.

  • by symbolset ( 646467 ) on Monday August 04, 2008 @12:45AM (#24462479) Journal

    Look, if it's secret, one copy is too many. For everything else, gmail it to five separate recipients. It's not like Google has ever lost any of the millions of emails I've received to date. (This is not a complaint -- they don't show me the spam unless I ask for it).

    And if they ever did lose an email, well, to paraphrase an old Doritos commercial, "They'll make more."

    Seriously, personally I view the persistence of data as a problem. It's harder to let go of than it is to keep.

  • Speed? (Score:4, Interesting)

    by grasshoppa ( 657393 ) on Monday August 04, 2008 @12:51AM (#24462515) Homepage

    My question is one of speed; this seems a promising addition to anyone's backup routine. However, most folks I know have 100s of gigs of data to back up. While differentials could be involved, right now tar'ing to tape works fast enough that the backup is done before the first staff member shows up for work.

    I assume we're beating the hell out of the processor here; so I'm wondering how painful is this in terms of speed?

    • Re: (Score:3, Informative)

      by Zadaz ( 950521 )

      Well, since my $100 Radio Shack CD [wikipedia.org] player that I bought in 1990 could do it in real time, I'm guessing that the requirements are pretty low. In fact a lot of hardware already uses it.

      If you read the rest of the page you find out it's very ingenious and efficient at doing what it does.

      While it's certainly not new (it's from 1960) or unused (hell, my phone uses it to read QR codes), I'm sure it's something that has been under the radar of a lot of Slashdot readers, so I'll avoid making a "slow news day" snark.

    • Re:Speed? (Score:5, Interesting)

      by xquark ( 649804 ) on Monday August 04, 2008 @01:23AM (#24462691) Homepage

      The speed of encoding and decoding directly relates to the type of RS code and the amount of FEC required. Generally speaking, erasure-style RS can go as low as O(n log n) (essentially inverting and solving a Vandermonde- or Cauchy-style matrix). A more general code that can correct errors (the difference between an error and an erasure is that in the latter you know the location of the error but not its magnitude) may require a more complex process, something like syndrome computation plus Berlekamp-Massey plus Forney, which is about O(n^2).

      It is possible to buy specialised h/w (or even use GPUs) to perform the encoding steps (getting roughly 100+ MB/s), and most software encoders can do about 50-60+ MB/s for RS(255,223) - YMMV
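      If you want a feel for the software numbers on your own box, here is a quick benchmark sketch. It assumes the pure-Python reedsolo package, which will be far slower than the tuned C/assembly or hardware encoders the figures above refer to, so treat the output as a lower bound:

```python
import os, time
from reedsolo import RSCodec

rsc = RSCodec(32)                       # 32 parity bytes per 255-byte codeword, i.e. RS(255,223)
payload = os.urandom(256 * 1024)        # 256 KiB of random test data

t0 = time.perf_counter()
rsc.encode(payload)
dt = time.perf_counter() - t0
print(f"{len(payload) / dt / 1e6:.2f} MB/s")   # expect much less than an optimized encoder
```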

  • by XorNand ( 517466 ) * on Monday August 04, 2008 @12:55AM (#24462541)
    Please, please stop thinking of version control as some sort of backup. When we initially started mandating the use of version control software, developers would just use the "commit" button instead of the "save" button. It makes it *much* more difficult to traverse the repo when you have three dozen commits per day, per developer, each commented with "ok. really should be fixed now." The worst offenders were issued an Etch A Sketch for a week while their notebooks went in for service *cough*. Problem solved.
    • Re: (Score:3, Insightful)

      by dgatwood ( 11270 )

      Well, you shouldn't commit until you believe you have it in a state where the changes are usable (i.e. don't break the tree), but beyond that, I'd rather see more commits of smaller amounts of code than giant commits that change ten thousand things. If you end up having to back out a change, it's much easier if you can easily isolate that change to a single commit. My rule is commit early, commit often. I'm not the only one, either:

      http://blog.red-bean.com/sussman/?p=96 [red-bean.com]

  • since it is only a "snapshot" of the data at a particular time. Any time you change the data, you have to do another "snapshot". What a major pain in the ass.

    This might be useful for archived files, but not something you change on a regular basis.
  • Yes, CDs and DVDs have error correction built in, but it doesn't do much if you happen to get a nice scratch that follows the spin of the disc. I.e. a moderate scratch from the outside to the inside of a CD is reasonably OK for data, but a scratch the other way will kill your data much more easily.

    For a while I was using PAR2, yes, the PAR2 used on USENET, to beef up the safety of my DVD backups of my home data. Unfortunately, PAR2 never really evolved to handle subdirectories properly, which mattered when I

    • Re: (Score:3, Informative)

      by Fnord666 ( 889225 )

      Eventually, I started using ICE ECC, http://www.ice-graphics.com/ICEECC/IndexE.html [ice-graphics.com], free as in beer, to enhance my DVD backups of stuff like photos and data. IIRC, I tested its ability to reconstruct missing files and it seemed OK at the time.

      Unfortunately this software looks like it is closed source and windows only. A program to apply error correcting codes to your archived files is only useful if you still have a platform to run it on. Hopefully 15 years from now when you go to recover your files y

    • Re: (Score:2, Insightful)

      by catenx ( 1101105 )
      The biggest limitation of PAR2 for me is the lack of directory handling. You can only create and verify parchives for the files within a directory. One solution is a script that runs the PAR creation or verification for each subdirectory, but this is hardly elegant. A backup that's hard to use is a backup that doesn't get used. A better solution is what ICE ECC offers.

      Agreeing with Fnord666, the software does not use an open algorithm. The general tone of the site is "use this software it is awesome, don't argue". There d
  • by InakaBoyJoe ( 687694 ) on Monday August 04, 2008 @01:28AM (#24462721)

    TFA introduces some new ".shielded" file format. But do we need yet another file format when PAR (Parchive) [wikipedia.org] has been doing the same job for years now? The PAR2 format is standardized and well-supported cross-platform, and might just have a future even IF you believe that Usenet is dying [slashdot.org]...

    I always thought it would be cool to have a script (roughly sketched at the end of this comment) that:

    • Runs at night and creates PAR2 files for the data on your HD.
    • Occasionally verifies file integrity against the PAR2 files.

    With a system like this, you wouldn't have to worry about throwing away old backups for fear that some random bit error might have crept into your newer backups. Also, if you back up the PAR2 files together with your data, as your backup media gradually degrades with time, you could rescue the data and move it to new media before it was too late.

    Of course, at the filesystem level there is always error correction, but having experienced the occasional bit error, I'd like the extra security that having a PAR2 file around would provide. Also, filesystem-level error correction tends to happen silently and not give you any warning until it fails and your data is gone. So a user-level, user-adjustable redundancy feature that's portable across filesystems and uses a standard file format like PAR would be really useful.
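    Here is a rough sketch of that nightly script idea, shelling out to the par2 command-line tool (par2cmdline). The function names are mine, and the -r5 redundancy flag and create/verify subcommands are what I see in my install; check par2 --help on yours. Running per subdirectory also works around PAR2's weak directory handling mentioned above.

```python
import subprocess
from pathlib import Path

def protect(directory: Path, redundancy: int = 5) -> None:
    """Create a protect.par2 recovery set in every subdirectory (and the root)."""
    for d in [p for p in directory.rglob("*") if p.is_dir()] + [directory]:
        files = [f.name for f in d.iterdir() if f.is_file() and f.suffix != ".par2"]
        if files:
            subprocess.run(["par2", "create", f"-r{redundancy}", "protect.par2", *files],
                           cwd=d, check=True)

def verify(directory: Path) -> bool:
    """Return True only if every recovery set still verifies cleanly."""
    ok = True
    for par in directory.rglob("protect.par2"):
        result = subprocess.run(["par2", "verify", par.name], cwd=par.parent)
        ok &= (result.returncode == 0)
    return ok
```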

    • by DrJimbo ( 594231 ) on Monday August 04, 2008 @02:11AM (#24462959)
      ... even though both TFA and PAR use Reed-Solomon.

      The difference is that TFA interleaves the data so it is robust against sector errors. A bad sector contains bytes from many different data blocks so each data block only loses one byte which is easy to recover from. If you use PAR and encounter a bad sector, you're SOL.

      PAR was designed to solve a different problem and it solves that different problem very well but it wasn't designed to solve the problem that is addressed by TFA. Use PAR to protect against "the occasional bit error" as you suggest, but use the scheme given in TFA to protect against bad sectors.
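      A sketch of the interleaving trick being described (my own toy version, not the article's tool, and it assumes the third-party reedsolo package): bytes are written out column-wise across codewords, so losing one contiguous "sector" costs each RS codeword only a couple of bytes instead of wiping out a whole codeword.

```python
from reedsolo import RSCodec

rsc = RSCodec(8)                      # 8 parity bytes per codeword: fixes up to 4 unknown bad bytes
data = bytes(range(64)) * 4           # 256 bytes of sample payload
K = 32                                # data bytes per codeword

# Encode fixed-size codewords, then interleave them column by column.
codewords = [rsc.encode(data[i:i + K]) for i in range(0, len(data), K)]
n, cw_len = len(codewords), len(codewords[0])
stream = bytearray(codewords[c][i] for i in range(cw_len) for c in range(n))

# A "bad sector": 16 consecutive bytes of the stored stream are lost...
stream[40:56] = b"\x00" * 16          # ...which touches each of the 8 codewords in only 2 places

# De-interleave and decode; every codeword now has few enough errors to repair.
repaired = bytearray()
for c in range(n):
    out = rsc.decode(bytearray(stream[c::n]))
    out = out[0] if isinstance(out, tuple) else out   # return type differs across reedsolo versions
    repaired += out
assert bytes(repaired) == data
```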

  • Doesn't par2 already employ reed-solomon? (http://en.wikipedia.org/wiki/Parchive [wikipedia.org])

    And it has all sorts of options to let you configure the amount of redundancy you'd like?

    And it has (ahem) been very well tested in the recovery of incomplete binary archives ... ?

    Now that usenet has been stripped of binaries, we'll have to find other uses for these tools ....

  • by MoFoQ ( 584566 ) on Monday August 04, 2008 @01:30AM (#24462733)

    quickpar [quickpar.org.uk] especially has been in use on usenet/newsgroups for years... oh yeah, forgot... they are trying to kill it.

    anyways... there's also dvdisaster [dvdisaster.net], which now has several ways of "hardening".
    One of them catches my attention: adding error correction data to a CD/DVD (via a disc image/ISO).

  • I'm glad it's not just me thinking my drives are dying sooner than they once did.

    Why is storage quality going down, and what does that mean for that 1TB drive for $200? Will its lifespan exceed two years?

    • Because everybody uses desktop quality SATA drives in enterprise RAIDs. And every vendor pushes density in desktop drives as hard as possible even though it's been getting more and more difficult.
      The market for high end "enterprise" drives is almost dead. When was the last time you saw a SCSI (FC,SAS) drive?
      There's nothing wrong with the basic approach but you have to do the math and use the correct AFR and TTR numbers. We just went from RAID 5 to RAID 6 because the observed drive failure rates were higher
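      For what "doing the math" looks like, here is a back-of-the-envelope sketch using the textbook MTTDL approximations for RAID5 and RAID6. These ignore unrecoverable read errors and correlated failures, which in practice make the single-parity picture even worse, so treat the outputs as optimistic paper numbers.

```python
HOURS_PER_YEAR = 24 * 365

def mttf_from_afr(afr: float) -> float:        # AFR = annualized failure rate, e.g. 0.03 for 3%
    return HOURS_PER_YEAR / afr

def mttdl_raid5(n: int, afr: float, ttr_hours: float) -> float:
    mttf = mttf_from_afr(afr)
    return mttf ** 2 / (n * (n - 1) * ttr_hours)

def mttdl_raid6(n: int, afr: float, ttr_hours: float) -> float:
    mttf = mttf_from_afr(afr)
    return mttf ** 3 / (n * (n - 1) * (n - 2) * ttr_hours ** 2)

# 12 desktop-class drives, 3% AFR, 24-hour rebuild window:
print(mttdl_raid5(12, 0.03, 24) / HOURS_PER_YEAR, "years")   # a few thousand years, on paper
print(mttdl_raid6(12, 0.03, 24) / HOURS_PER_YEAR, "years")   # roughly a thousand times more
```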

      • by Hyppy ( 74366 )

        The market for high end "enterprise" drives is almost dead. When was the last time you saw a SCSI (FC,SAS) drive?

        I can look over my shoulder at approximately 45 FC drives, 60-70 Ultra320 SCSI Drives, and another dozen or two SAS drives. That's just from my little window into the server room.

        Enterprise class drives are far from dead. Unless, of course, you just can't afford them.

  • RAID6 [wikipedia.org] uses Reed-Solomon error correction. In fact, RAID5 can be viewed as a special case of RAID6.

    This thing looks like a solution in search of a problem. Slow news day?
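    To see why RAID5 falls out as the special case mentioned above, here is an illustrative P/Q parity sketch (my own toy code, not any particular implementation): P is the plain XOR parity that RAID5 stops at, and Q is a Reed-Solomon-style syndrome over GF(2^8), shown with the 0x11D polynomial commonly used for RAID6.

```python
from functools import reduce

def gf_mul(a: int, b: int) -> int:              # multiply in GF(2^8), reducing by 0x11D
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    return reduce(gf_mul, [a] * n, 1)

def pq(stripe):
    """stripe: one equal-length chunk per data drive. Returns (P, Q) parity chunks."""
    p, q = bytearray(len(stripe[0])), bytearray(len(stripe[0]))
    for i, chunk in enumerate(stripe):
        weight = gf_pow(2, i)                   # drive i is weighted by g^i in Q
        for j, byte in enumerate(chunk):
            p[j] ^= byte                        # P: ordinary XOR parity (all RAID5 has)
            q[j] ^= gf_mul(weight, byte)        # Q: the Reed-Solomon-style second parity
    return bytes(p), bytes(q)

stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
p, q = pq(stripe)
# A single failure can be repaired from P alone, RAID5-style:
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(stripe[0], stripe[2], p))
assert rebuilt == stripe[1]
```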

  • Yes, this has been done forever etc., but has anyone experienced any ugly "bit rot"? I mean, I've had firewalls that would checksum applications, and if they ever complained about surprise changes I didn't catch it. Equally, I have about 100GB for which I have CSVs - no spontaneous corruption to note. Source code should very easily fail to compile if a random bit was flipped; I also can't think of any case there. I guess if it's that important, having a PAR file with some recovery data won't hurt, but first I'd take R

  • Channel noise can be overcome via increased redundancy in transmission/storage, thereby reducing the effective transfer rate/storage density. Film at 11.

    I could be wrong, but I'm pretty sure this is why we have on-disk (and on-bus) checksums and ECC RAM. And frankly if your mission-critical data is being ruined by DVD scratches, adding RS codes to your DVDs is probably not going to solve the fundamental problem of system administrator incompetence.

    / Seriously, these days Fark has more technically competent

  • by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Monday August 04, 2008 @03:26AM (#24463305) Homepage

    Working for a data recovery company, I know that in about half the cases where data is lost, the whole drive "disappears". So, bad sectors? You can solve that problem with Reed-Solomon! Fine! But that doesn't replace the need for backups to help you recover from accidental removal, fire, theft and total disk failure (and probably a few other things I can't come up with right now)...

    • by jd ( 1658 )
      Reed-Solomon is designed only for randomly-distributed errors. For errors that are in a large, contiguous block (such as a sector) you need Turbo Codes. So, whilst you are correct that the solution isn't that useful for the bulk of real-world conditions (ie: lost drives), it really isn't useful for the condition it is supposed to fix either (lost sectors). As others have noted, most drives already employ Reed-Solomon to fix random bit errors, so employing it a second time would seem to just hog space with e
  • I'm sorry, but this is stupid. Error correction is done at the level of the disk controller. You gain nothing by re-doing it at the level of the file system. You only get file-system level errors when you don't pay attention to the disk controller telling you that the disk is going bad and wait for the disk to degrade to the point where errors can't be corrected anymore.

    Install one of the many utilities that monitor disk health and replace your disk when they tell you there's a problem with your disk.

  • Forward error correction using Vandermonde matrices does this quite nicely. There are (N,K) codes that allow K blocks to be encoded into N blocks (N>K) so that any K of the N can be used to decode. Thus you can lose N-K blocks. For blocks, read tracks, or sectors, or whatever unit is typically lost in a media failure.
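    A toy illustration of that any-K-of-N property. It is my own simplification: Lagrange interpolation over the prime field GF(257) rather than a real GF(2^8) Vandermonde code, but it shows K data symbols becoming N shares, any K of which recover the data.

```python
P = 257  # prime just above 255, so every byte value is a field element

def poly_mul_linear(poly, xm):
    """Multiply a polynomial (coefficient list, lowest power first) by (x - xm) mod P."""
    out = [(-xm * poly[0]) % P]
    out += [(poly[i - 1] - xm * poly[i]) % P for i in range(1, len(poly))]
    out.append(poly[-1] % P)
    return out

def encode(data, n):
    """K data bytes -> N shares (x, f(x)) of the degree-(K-1) polynomial f."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the K data bytes from any K surviving shares via Lagrange interpolation."""
    shares = shares[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(shares):
        basis, denom = [1], 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                basis = poly_mul_linear(basis, xm)
                denom = denom * (xj - xm) % P
        scale = yj * pow(denom, P - 2, P) % P        # divide by denom via Fermat inverse
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return bytes(coeffs)

data = b"N of K!!"                       # K = 8 symbols
shares = encode(data, 12)                # N = 12: tolerates losing any 4 shares
survivors = shares[2:4] + shares[6:]     # shares 0, 1, 4 and 5 were "lost"
assert decode(survivors, len(data)) == data
```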
