Data Storage

Error-Proofing Data With Reed-Solomon Codes 196

ttsiod recommends a blog entry in which he details steps to apply Reed-Solomon codes to harden data against errors in storage media. Quoting: "The way storage quality has been nose-diving in recent years, you'll inevitably end up losing data because of bad sectors. Backing up, using RAID, and keeping version-control repositories are some of the methods used to cope; here's another that can help prevent data loss in the face of bad sectors: hardening your files with Reed-Solomon codes. It is a software-only method, and it has saved me from a lot of grief..."
This discussion has been archived. No new comments can be posted.

  • by Rene S. Hollan ( 1943 ) on Sunday August 03, 2008 @05:52PM (#24459631)

    ... at least CD-ROMs employ RS codes.

  • Re:ZFS? (Score:5, Informative)

    by xquark ( 649804 ) on Sunday August 03, 2008 @05:59PM (#24459719) Homepage

    Checksums really only help in detecting errors. Once you've found errors, you can repair them if you have an exact redundant copy somewhere else. What Reed-Solomon codes do is provide not only the error-detecting ability but also the error-correcting ability, while at the same time reducing the amount of redundancy required to a near-theoretical minimum.

    BTW, checksums have limits on how many errors they can detect within, let's say, a file or other block of data. A simple rule of thumb (though not exact) is that 16- and 32-bit checksums can detect up to 16 and 32 bit errors respectively; beyond that, the chance of missing bit errors goes up, and the corruption could even go completely undetected.
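
    A minimal, self-contained sketch of that point (a toy example, not anything from the article): a deliberately weak 16-bit additive checksum flags a single corrupted byte, but two errors that cancel out slip through undetected, and in no case does the checksum tell you how to repair the data.

        def checksum16(data: bytes) -> int:
            """Sum of all bytes, modulo 2**16 -- a deliberately weak checksum."""
            return sum(data) & 0xFFFF

        original = bytearray(b"an example block of data stored on disk")
        good_sum = checksum16(original)

        # One flipped bit: detected, because the byte sum changes.
        single = bytearray(original)
        single[5] ^= 0x01
        assert checksum16(single) != good_sum

        # Two compensating byte errors: NOT detected.
        double = bytearray(original)
        double[5] = (double[5] + 1) % 256    # one byte nudged up ...
        double[20] = (double[20] - 1) % 256  # ... another nudged down
        assert bytes(double) != bytes(original)
        assert checksum16(double) == good_sum

        print("single error caught; compensating double error missed")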

  • by Architect_sasyr ( 938685 ) on Monday August 04, 2008 @12:53AM (#24462527)
    From the CD-ROM Wikipedia article:

    A CD-ROM sector contains 2352 bytes, divided into 98 24-byte frames. The CD-ROM is, in essence, a data disk, which cannot rely on error concealment, and therefore requires a higher reliability of the retrieved data. In order to achieve improved error correction and detection, a CD-ROM has a third layer of Reed-Solomon error correction.[1] A Mode-1 CD-ROM, which has the full three layers of error correction data, contains a net 2048 bytes of the available 2352 per sector. In a Mode-2 CD-ROM, which is mostly used for video files, there are 2336 user-available bytes per sector. The net byte rate of a Mode-1 CD-ROM, based on comparison to CDDA audio standards, is 44.1k/s×4B×2048/2352 = 153.6 kB/s. The playing time is 74 minutes, or 4440 seconds, so that the net capacity of a Mode-1 CD-ROM is 682 MB.

    I'd say that's a yes.
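
    For what it's worth, the arithmetic in the excerpt checks out; a quick sanity check using only the quoted numbers (nothing measured independently):

        samples_per_sec = 44_100            # CDDA sample rate
        bytes_per_sample = 4                # 16-bit stereo = 4 bytes per sample
        user_bytes, raw_bytes = 2048, 2352  # Mode-1 payload vs. full sector

        net_rate = samples_per_sec * bytes_per_sample * user_bytes / raw_bytes
        print(net_rate)                     # 153600.0 bytes/s -> 153.6 kB/s
        print(net_rate * 74 * 60 / 1e6)     # ~682 MB over a 74-minute disc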

  • by Anonymous Coward on Monday August 04, 2008 @01:11AM (#24462631)

    Yeah, but is there another problem elsewhere in the system? I have an el-cheapo USB-PATA adapter with an MTBF (mean time to bit flip) of about 2 hours. Every other disk was ruined, and I only knew because of a sick little obsession with par2. Disk data is ECC'd, PATA data is parity checked, and USB data is checksummed. Still, inside one little translator chip, all of that can be ruined. And that's why data integrity MUST be an operating/file system service.

  • Re:Speed? (Score:3, Informative)

    by Zadaz ( 950521 ) on Monday August 04, 2008 @01:18AM (#24462657)

    Well, since my $100 Radio Shack CD [wikipedia.org] player I bought in 1990 could do it in real time, I'm guessing that the requirements are pretty low. In fact, a lot of hardware already uses it.

    If you read the rest of the page you find out it's very ingenious and efficient at doing what it does.

    While it's certainly not new (it's from 1960) or unused (hell, my phone uses it to read QR codes), I'm sure it's something that has been under the radar of a lot of Slashdot readers, so I'll avoid making a "slow news day" snark.

  • Re:Interesting (Score:3, Informative)

    by Anonymous Coward on Monday August 04, 2008 @01:26AM (#24462711)

    The cross-platform program dvdisaster [dvdisaster.net] will add extra information to your DVD as an error-correcting code. Alternatively, you can make a parity file for an already-existing DVD and save it somewhere else.

    It actually has a GUI too, so it must be user friendly.

  • by MoFoQ ( 584566 ) on Monday August 04, 2008 @01:30AM (#24462733)

    QuickPar [quickpar.org.uk] in particular has been in use on Usenet/newsgroups for years... oh yeah, forgot... they are trying to kill it.

    Anyway, there's also dvdisaster [dvdisaster.net], which now has several ways of "hardening".
    One of them catches my attention: it adds error-correction data to a CD/DVD (via a disc image/ISO).

  • Re:ZFS? (Score:3, Informative)

    by bobbozzo ( 622815 ) on Monday August 04, 2008 @01:33AM (#24462745)

    Mirroring is RAID-1, not 0.

  • by Solandri ( 704621 ) on Monday August 04, 2008 @01:49AM (#24462837)
    That's a pretty fundamental part of information theory - communication in a noisy channel [wikipedia.org]. If your communications (or data storage) are digital, you can overcome any level of random noise (error) at the cost of degraded transmission rate (increased storage requirement). Before CDs, it was (and still is) most prevalent in modem protocols and hard drives. Modern hard drives would probably be impossible without it - read errors are the norm, not the exception [storagereview.com]. It's just hidden from the high-level software by multiple levels of error correction in the low-level firmware.
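    As a side note (an editorial illustration, not part of the comment): Shannon's noisy-channel theorem puts a number on that trade-off. For a binary symmetric channel that flips each bit with probability p, reliable communication is possible at any rate below the capacity C = 1 - H(p) bits per channel use:

        from math import log2

        def bsc_capacity(p: float) -> float:
            """Capacity of a binary symmetric channel with bit-flip probability p."""
            if p in (0.0, 1.0):
                return 1.0
            h = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy H(p)
            return 1 - h

        for p in (0.001, 0.01, 0.1):
            print(f"bit-flip probability {p}: capacity {bsc_capacity(p):.3f} bits/use")
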
  • by Solandri ( 704621 ) on Monday August 04, 2008 @02:02AM (#24462911)
    Data is stored linearly on a CD (and DVD). So the data can survive huge scratches running from the center to edge, but is very susceptible to radial scratches rotated around the center. If you think of a CD as an old-style phonograph record, you can scratch across the grooves and the error correction will fix it; but scratching along a groove will quickly corrupt the data because the scratch will destroy sequential data (and its ECC). That's why they recommend cleaning CDs by wiping from the center out, never in a circular motion.
  • by DrJimbo ( 594231 ) on Monday August 04, 2008 @02:11AM (#24462959)
    ... even though both TFA and PAR use Reed-Solomon.

    The difference is that TFA interleaves the data so it is robust against sector errors. A bad sector contains bytes from many different data blocks, so each data block loses only one byte, which is easy to recover from. If you use PAR and encounter a bad sector, you're SOL.

    PAR was designed to solve a different problem and it solves that different problem very well but it wasn't designed to solve the problem that is addressed by TFA. Use PAR to protect against "the occasional bit error" as you suggest, but use the scheme given in TFA to protect against bad sectors.
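
    A rough sketch of the interleaving idea described in this comment (hypothetical code with assumed sector and codeword sizes, not TFA's actual implementation): write the i-th byte of every codeword into sector i, so that losing a whole sector costs each codeword only a single byte, well within what a modest RS code can correct.

        import os

        SECTOR_SIZE = 512     # bytes per physical sector; also the number of codewords
        CODEWORD_LEN = 255    # e.g. an RS(255, 223) codeword

        def interleave(codewords):
            """Turn SECTOR_SIZE codewords of CODEWORD_LEN bytes into CODEWORD_LEN sectors."""
            return [bytes(cw[i] for cw in codewords) for i in range(CODEWORD_LEN)]

        def deinterleave(sectors):
            """Inverse of interleave()."""
            return [bytes(sec[j] for sec in sectors) for j in range(SECTOR_SIZE)]

        codewords = [os.urandom(CODEWORD_LEN) for _ in range(SECTOR_SIZE)]
        sectors = interleave(codewords)
        sectors[7] = b"\x00" * SECTOR_SIZE   # simulate one unreadable sector
        damaged = deinterleave(sectors)

        # Every codeword now differs from its original in at most one position.
        assert all(sum(a != b for a, b in zip(c, d)) <= 1
                   for c, d in zip(codewords, damaged))
        print("one bad sector -> at most 1 byte lost per codeword")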

  • Bose Chaudhuri (Score:1, Informative)

    by ishmalius ( 153450 ) on Monday August 04, 2008 @02:12AM (#24462969)
    These codes, http://en.wikipedia.org/wiki/BCH_code [wikipedia.org], are far superior. However, both Miller code and these pale in comparison to low-density parity-check codes: http://en.wikipedia.org/wiki/Low-density_parity-check_code [wikipedia.org]
  • Re:ZFS? (Score:4, Informative)

    by this great guy ( 922511 ) on Monday August 04, 2008 @03:56AM (#24463433)

    I have been a ZFS user for a while and know a lot of its internals. Let me comment on what you said.

    checksums really only help in detecting errors.

    Not in ZFS. When the checksum reveals silent data corruption, ZFS attempts to self-heal by rewriting the sector with a known-good copy. Self-healing is possible if you are using mirroring, raidz (single parity), raidz2 (dual parity), or even a single disk (provided the copies=2 filesystem attribute is set). The self-healing algorithm in the raidz and raidz2 cases is actually interesting, as it is based on combinatorial reconstruction: ZFS makes a series of guesses as to which drive(s) returned bad data, reconstructs the data block from the other drives, and then validates each guess by verifying the checksum (a rough sketch of this guess-and-verify loop follows at the end of this comment).

    checksums have limits on how many errors they can detect.

    All the ZFS checksumming algorithms (fletcher2, fletcher4, SHA-256) generate 256-bit checksums. The default is fletcher2, which offers very good error detection (even for errors affecting more than 256 bits of data) assuming unintentional corruption (the fletcher family are not cryptographic hash algorithms; it is actually possible to find collisions intentionally). SHA-256 is collision-resistant, so in practice it will detect all data corruption: it would be computationally infeasible to come up with a corrupted data block that still matches the SHA-256 checksum.

    A good intro to the ZFS capabilities is this set of slides [opensolaris.org].
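
    The guess-and-verify reconstruction mentioned above, sketched for a toy single-parity stripe (hypothetical code, not ZFS itself; the block contents and the SHA-256 checksum are just for illustration): guess which drive returned bad data, rebuild that drive's share from parity, and accept the guess whose result matches the stored checksum.

        import hashlib
        from functools import reduce

        def xor(blocks):
            """Bytewise XOR of equal-length blocks (single-parity computation)."""
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        def checksum(block):
            return hashlib.sha256(block).digest()

        data = [b"AAAA", b"BBBB", b"CCCC"]        # shares on three data drives
        parity = xor(data)                        # share on the parity drive
        good_sum = checksum(b"".join(data))       # checksum stored elsewhere

        read = [b"AAAA", b"XXXX", b"CCCC"]        # one drive silently lied

        def reconstruct(read, parity, good_sum):
            for bad in range(len(read)):                       # guess the liar
                others = [d for i, d in enumerate(read) if i != bad]
                rebuilt = xor(others + [parity])               # rebuild its share
                candidate = read[:bad] + [rebuilt] + read[bad + 1:]
                if checksum(b"".join(candidate)) == good_sum:  # verify the guess
                    return candidate
            return None                                        # too much damage

        print(reconstruct(read, parity, good_sum))  # -> [b'AAAA', b'BBBB', b'CCCC']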

  • by xquark ( 649804 ) on Monday August 04, 2008 @04:20AM (#24463543) Homepage

    Your comment is incorrect: RS codes are a subset of BCH codes. In fact, BCH codes are a general definition of a class of algebraic codes, nothing more. Your comment about one being better than the other for a specific purpose is wrong.

    Think of BCH codes as "vehicle" and RS codes as "the Bugatti Veyron"; that is the relationship.

  • by Anonymous Coward on Monday August 04, 2008 @04:43AM (#24463657)

    PAR is perfectly able to correct whole sectors, or even whole missing files, as long as the missing data is less than the amount of parity data. But then, that's true for all algorithms.

  • by Anonymous Coward on Monday August 04, 2008 @06:37AM (#24464193)

    Those who sacrifice speed for data integrity deserve neither - BF

    Those who sacrifice data integrity for speed deserve neither - Sys Admin

  • by brix ( 27642 ) on Monday August 04, 2008 @08:21AM (#24464805)

    I do the same as the AC, but I keep a copy of the smallest par2 from the set on my local drive for recovery (and back these up as well). If a CD/DVD ever goes bad to the point it won't even read the FS, you can still create an ISO file of it including all errors. The par2 recovery can be done using the ISO image at that point, and as long as the damage to the DVD didn't exceed the redundancy level, full recovery of the original files is possible.

    Note that you aren't recovering the ISO itself at this point, you are using the ISO as input to par2repair (or the GUI). The recovery is done using the blocks of the original files and pars. The end result is the original file(s) stored on the disc.

    It sounds like dvdisaster does something similar, but I've been using this technique with pars for a few years now.

  • by complete loony ( 663508 ) <Jeremy@Lakeman.gmail@com> on Monday August 04, 2008 @09:00AM (#24465223)
    AFAIK, when a disc is scratched you are more likely to get a tracking error than a failure to decode the audio.
  • by mentaldrano ( 674767 ) on Monday August 04, 2008 @09:22AM (#24465501)

    Radial scratches go from center to edge, azimuthal scratches go around the center.

  • by Fnord666 ( 889225 ) on Monday August 04, 2008 @09:28AM (#24465597) Journal

    Eventually, I started using ICE ECC, http://www.ice-graphics.com/ICEECC/IndexE.html [ice-graphics.com], free as in beer, to enhance my DVD backups of stuff like photos and data. IIRC, I tested its ability to reconstruct missing files and it seemed OK at the time.

    Unfortunately, this software looks like it is closed source and Windows-only. A program that applies error-correcting codes to your archived files is only useful if you still have a platform to run it on. Hopefully, 15 years from now when you go to recover your files, you'll still have an old Windows machine available for use.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...