Data Storage

Error-Proofing Data With Reed-Solomon Codes

ttsiod recommends a blog entry in which he details steps to apply Reed-Solomon codes to harden data against errors in storage media. Quoting: "The way storage quality has been nose-diving in recent years, you'll inevitably end up losing data because of bad sectors. Backing up, using RAID, and version control repositories are some of the methods used to cope; here's another that can help prevent data loss in the face of bad sectors: hardening your files with Reed-Solomon codes. It is a software-only method, and it has saved me from a lot of grief..."
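For readers who want to see the idea in code, here is a minimal sketch of chunk-by-chunk hardening, not the submitter's actual tool, using the third-party Python package reedsolo; the chunk size and parity count are illustrative choices.

    # Minimal sketch of Reed-Solomon file hardening (illustrative only, not
    # the tool from the linked article). Requires the third-party "reedsolo"
    # package: pip install reedsolo
    from reedsolo import RSCodec

    DATA_BYTES = 223    # payload bytes per codeword
    PARITY_BYTES = 32   # parity bytes per codeword; corrects up to 16 bad bytes

    rsc = RSCodec(PARITY_BYTES)

    def harden(src_path, dst_path):
        """Append RS parity to every DATA_BYTES-sized chunk of src_path."""
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while chunk := src.read(DATA_BYTES):
                dst.write(rsc.encode(chunk))

    def recover(src_path, dst_path):
        """Strip parity, correcting up to PARITY_BYTES // 2 bad bytes per chunk."""
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while block := src.read(DATA_BYTES + PARITY_BYTES):
                decoded = rsc.decode(block)
                # newer reedsolo versions return (message, message+ecc, errata)
                msg = decoded[0] if isinstance(decoded, tuple) else decoded
                dst.write(msg)

A real tool would also interleave codewords so that one bad sector (hundreds of consecutive bytes) is spread thinly across many codewords instead of overwhelming a single one.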
  • by xquark ( 649804 ) on Sunday August 03, 2008 @05:54PM (#24459657) Homepage

    It really depends on where you store the FEC: some techniques store it separately, others concatenate it with the data, and others interleave it. Each method has its own advantages and disadvantages.
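    As an illustration of the interleaving option, a sketch only, with made-up layout choices:

        def interleave(codewords):
            """Write equal-length codewords column by column, so a burst of
            consecutive bad bytes touches at most a few bytes of any one codeword."""
            length = len(codewords[0])
            return bytes(cw[i] for i in range(length) for cw in codewords)

        def deinterleave(data, n_codewords):
            """Invert interleave(): codeword i is every n_codewords-th byte."""
            return [data[i::n_codewords] for i in range(n_codewords)]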

  • by XorNand ( 517466 ) * on Monday August 04, 2008 @12:55AM (#24462541)
    Please, please stop thinking of version control as some sort of backup. When we initially started mandating the use of version control software, developers would just use the "commit" button instead of the "save" button. It makes it *much* more difficult to traverse the repo when you have three dozen commits per day, per developer, each commented with "ok. really should be fixed now." The worst offenders were issued an Etch A Sketch for a week while their notebooks went in for service *cough*. Problem solved.
  • huh??? (Score:3, Insightful)

    by Jane Q. Public ( 1010737 ) on Monday August 04, 2008 @01:18AM (#24462661)
    If you think storage quality has been nose-diving, then you haven't been around very long. It just isn't so, and there really is not much more I can say to add to that.

    I have been around this industry quite a while, and I call bullshit on that.
  • by Futurepower(R) ( 558542 ) on Monday August 04, 2008 @01:19AM (#24462665) Homepage
    "... data integrity MUST be an operating/file system service."

    I agree. I'm willing to have a small loss in speed and a small increase in price to have better data integrity.

    There is already data integrity technology embedded in hard drives, and I support making it more robust.
  • by dgatwood ( 11270 ) on Monday August 04, 2008 @01:26AM (#24462715) Homepage Journal

    Well, you shouldn't commit until you believe you have it in a state where the changes are usable (i.e. don't break the tree), but beyond that, I'd rather see more commits of smaller amounts of code than giant commits that change ten thousand things. If you end up having to back out a change, it's much easier if you can easily isolate that change to a single commit. My rule is commit early, commit often. I'm not the only one, either:

    http://blog.red-bean.com/sussman/?p=96 [red-bean.com]

  • Re:Harden Files (Score:1, Insightful)

    by Anonymous Coward on Monday August 04, 2008 @01:37AM (#24462767)

    It never ceases to amaze me that the juvenile "heh heh heh.. he said 'harden'" response always gets modded funny. Mods, here's a tip: These kinds of jokes aren't funny unless you are a) 13 years old or b) really drunk.

  • by ceswiedler ( 165311 ) * <chris@swiedler.org> on Monday August 04, 2008 @01:37AM (#24462771)

    The best solution is for developers to use their own private branches. Then they can commit as much as they want and integrate into the main branch when they're ready. Unfortunately, Subversion has crappy support for integration (even with version 1.5, AFAICT) compared to something like Perforce.

  • by Pseudonym ( 62607 ) on Monday August 04, 2008 @01:59AM (#24462891)

    OK, let's assume it's a 128-bit hash. For a 1GB file, how many combinations of 1GB will produce the same hash?

    You're asking the wrong question.

    The right question is: given a 1GB file, how much "mutation" do you have to do to it to produce a file with the same hash? And the answer to that is: enough to make the data unrecoverable no matter what you do.
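    Back-of-the-envelope numbers, assuming an ideal 128-bit hash (a rough calculation, not from the parent post):

        \[
          \frac{2^{8 \times 2^{30}}\ \text{possible 1 GB files}}{2^{128}\ \text{hash values}}
          = 2^{8{,}589{,}934{,}464}\ \text{files per hash value on average,}
        \]
        \[
          \text{yet}\quad \Pr[\text{a given corrupted copy hashes the same}] \approx 2^{-128} \approx 3\times 10^{-39}.
        \]

    Collisions exist in astronomical numbers, but the chance that random damage lands on one is negligible; by the time it does, the file is noise anyway.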

  • by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Monday August 04, 2008 @03:26AM (#24463305) Homepage

    Working for a data recovery company, I know that in about half the cases where data is lost, the whole drive "disappears". So, bad sectors? You can solve that problem with Reed-Solomon, fine! But that doesn't replace the need for backups, which help you recover from accidental deletion, fire, theft, and total disk failure (and probably a few other things I can't come up with right now).

  • by twistedcubic ( 577194 ) on Monday August 04, 2008 @07:17AM (#24464365)
    You mean to say injective, though being bijective is sufficient.
  • by catenx ( 1101105 ) on Monday August 04, 2008 @11:07AM (#24467085)
    The biggest limitation of PAR2 for me is the lack of directory handling. You can only create and verify parchives for the files within a single directory. One solution is a script that runs PAR creation or verification for each subdirectory (a rough sketch is appended at the end of this comment), but this is hardly elegant. A backup that is hard to use is a backup that doesn't get used. A better solution is what ICE ECC offers.

    Agreeing with Fnord666, the software does not use an open algorithm. The general tone of the site is "use this software, it is awesome, don't argue." There doesn't seem to be any verification of its awesomeness. Furthermore, the program author's tone in many of the forum posts is abrasive and borders on combative when people question it.

    PAR2 is proven but limited. This /. post is the closest I've seen to addressing the progression of software past Parchives, or at least enhancing the PAR spec for new needs (directory traversing for example).

    Is this really the case, that no one has taken PAR2 to the next level? Judging from the lack of links accompanying the flamebaiting "we've been doing this for years" posts in these comments, there isn't much progress.

    I want PAR3.
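
    For what it's worth, here is the per-subdirectory workaround sketched out (it assumes the par2 command-line tool is installed; the -r10 redundancy flag and the recovery.par2 filename are illustrative choices):

        # Rough sketch of a per-subdirectory par2 wrapper (assumes the
        # par2cmdline tool is on PATH; -r10 = ~10% redundancy, and
        # "recovery.par2" is just an illustrative filename).
        import os
        import subprocess
        import sys

        def par2_walk(root, verify=False):
            for dirpath, _dirnames, filenames in os.walk(root):
                files = [f for f in filenames if not f.endswith(".par2")]
                if not files:
                    continue
                if verify:
                    if "recovery.par2" not in filenames:
                        continue
                    cmd = ["par2", "verify", "recovery.par2"]
                else:
                    cmd = ["par2", "create", "-r10", "recovery.par2"] + files
                subprocess.run(cmd, cwd=dirpath, check=True)

        if __name__ == "__main__":
            par2_walk(sys.argv[1], verify=(len(sys.argv) > 2 and sys.argv[2] == "verify"))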
  • by PitaBred ( 632671 ) <.gro.sndnyd.derbatip. .ta. .todhsals.> on Monday August 04, 2008 @11:09AM (#24467109) Homepage

    Thing is, the "overcompressed" MP3 recorders are good enough. Most people use them to record lecture notes, or a meeting, or just talking to themselves. Those are about the only reasons to really need a portable recorder, and for those uses, mp3 is very good. Just because it's low bitrate doesn't mean it's bad, and just because your DAT recorder had higher quality doesn't mean it's more fit for the purposes it would be used for. Seriously... running and recording? Why would you ever want to do that?
