Automated PDF File Integrity Checking?

WomensHealth writes "I have about 6500 PDFs in my 'My Paperport Documents' folder that I've created over the years. As with all valuable data, I maintain off-site backups. Occasionally, when accessing a very old folder, I'll find one or two corrupted files. I would like to incorporate into my backup routine a way of verifying the integrity of each file, so that I can immediately identify any that become corrupted and replace them with a backed-up version. I'm not talking about verifying the integrity of the backup as a whole; instead, I want to periodically check the integrity of each individual PDF in the collection. Any way to do this in an automated fashion? I could use either an XP or OS X solution. I could even boot a Linux distro if required."
  • How about... (Score:5, Informative)

    by Uncle Focker ( 1277658 ) on Thursday May 22, 2008 @03:14PM (#23509666)
    Maintaining a database of MD5 checksums of the archived versions of the files and periodically checking your live versions against them?
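    For example, a rough sketch (the paths here are placeholders for wherever the archive and live copies actually live):

    (cd /path/to/archive && md5sum *.pdf) > archive.md5    # build the database once
    (cd /path/to/live && md5sum -c /path/to/archive.md5)   # re-check periodically

    Any line reported as FAILED is a live file that no longer matches its archived copy.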
    • Re:How about... (Score:5, Informative)

      by ZephyrXero ( 750822 ) <zephyrxero@yah o o . c om> on Thursday May 22, 2008 @03:20PM (#23509760) Homepage Journal
      It sounds more like what he needs is to take an md5sum of new files when they are added to the archive, and then to verify that any changes to them were made by a user deliberately overwriting the file rather than by the sort of software/hardware corruption he's apparently experiencing. The md5 part is easy to automate; the second part, however, may require a human eye :/
    • Re:How about... (Score:4, Informative)

      by Ritchie70 ( 860516 ) on Thursday May 22, 2008 @03:25PM (#23509820) Journal
      For Windows, Microsoft has a free command line tool, "FCIV.EXE", that will do this (MD5 and/or SHA) and save it all in an XML database for you. It will also then validate the files against that database.

      It's part of one of the resource kits.
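      For instance (the paths are placeholders, and the switches are from the FCIV documentation as I recall it, so double-check them):

      fciv.exe -add "C:\My Paperport Documents" -r -md5 -xml pdfsums.xml
      fciv.exe -v -xml pdfsums.xml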
    • Re: (Score:2, Informative)

      by TheRaven64 ( 641858 )
      MD5 gives you error detection, but not correction. You'd be better off with par2 for this kind of thing. When you add a file, run the par2 utility to generate the check file. On OS X, do this with a Folder Action whenever a new file is created with a .pdf extension. Then just set up a cron job that runs every month or so and attempts to verify / repair the files. Make sure you check the output of this, since silent data corruption is usually a sign that the drive is on its way out.
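      A rough sketch of both halves, assuming the par2 command-line utility is installed (the directory is a placeholder):

      cd "$HOME/My Paperport Documents"
      # create recovery data for any PDF that doesn't have it yet
      for f in *.pdf; do
          [ -e "$f.par2" ] || par2 create "$f.par2" "$f"
      done
      # monthly cron half: verify each file and attempt a repair on failure
      for p in *.pdf.par2; do
          par2 verify "$p" >/dev/null || par2 repair "$p"
      done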
      • I would like to incorporate into my backup routine a way of verifying the integrity of each file, so that I can immediately identify any that become corrupted and replace them with a backed-up version.

        There is no article to NOT read here, buddy. And PAR(2) could be the worst suggestion for such a situation that I have ever heard. Parity is meant to work over a finite set of data; this guy has a growing collection of PDFs. You just added a layer of complexity (you'd need to somehow define "sets" of PDFs).
    • by mrmeval ( 662166 )
      md5 is good but computationally intensive. It would be good to make one when the file is first added, but is there a known way to detect a bad file that takes less time? I don't know whether a simple CRC or even modulo-11 would be good or bad.

      I'm definitely not a programmer nor a math geek. :(
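      For what it's worth, a plain CRC is much cheaper to compute than MD5, and the POSIX cksum utility does one; a sketch, with sums.crc as an illustrative filename:

      cksum *.pdf > sums.crc          # record a CRC-32 and size for each file
      cksum *.pdf | diff - sums.crc   # later: any changed file shows up in the diff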
  • by Anonymous Coward
    Remote backup with a notification of changed files and versions of all previous files for restoration.
  • quick script (Score:3, Interesting)

    by debatem1 ( 1087307 ) on Thursday May 22, 2008 @03:17PM (#23509714)
    It wouldn't be too hard to write an inotify script that stores a backup of the file and an md5sum whenever you drop a file in. It wouldn't help you recover an already corrupt document, but it would help you stop corruption in the future. A tie-in to the actions menu would make it more usable, but that's a bit more effort, and such solutions probably already exist.
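    A sketch of that script, assuming inotify-tools is installed (both paths are placeholders):

    WATCH="$HOME/My Paperport Documents"; BACKUP="$HOME/pdf-backup"
    inotifywait -m -r -e close_write --format '%w%f' "$WATCH" |
    while read -r f; do
        case "$f" in
            *.pdf) cp -p "$f" "$BACKUP/" && md5sum "$f" >> "$BACKUP/sums.md5" ;;
        esac
    done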
  • Sure there's a way (Score:3, Interesting)

    by b4dc0d3r ( 1268512 ) on Thursday May 22, 2008 @03:29PM (#23509882)
    There are PDF libraries out there - write a wrapper that loads a file, and when it gets to the end without error emits a 0 "no error" return code, and any errors result in a non-zero code.

    Or maybe there are other command-line tools that issue a "failed to load" error. That's where I'd look first: something like a tool to strip content out of a PDF, scripted so it outputs to /dev/null while you check the exit code. I'd be surprised if there weren't a ready-made solution for this somewhere.
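    One candidate along those lines is pdftotext from the Xpdf/poppler tools, which exits non-zero on files it can't parse; a quick sketch:

    for f in *.pdf; do
        pdftotext "$f" /dev/null 2>/dev/null || echo "POSSIBLY CORRUPT: $f"
    done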
  • md5sum (Score:2, Insightful)

    by Nozsd ( 1080965 )
    md5sum *.pdf > sums
    md5sum -c sums

    Not exactly automated, but I wouldn't exactly call typing two lines manual labor, and once you've got the sums you really just need the second line.

    Put something like this in a shell script and you can make it automatically replace files that fail a hash check with a good backup. Use perl, python, or whatever, and you can make it work across Windows, OS X, and *nix.
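    A sketch of that replacement step (ARCHIVE and BACKUP are placeholder paths, and this assumes filenames without colons):

    ARCHIVE="$HOME/My Paperport Documents"; BACKUP="/mnt/backup/pdfs"
    cd "$ARCHIVE" || exit 1
    md5sum -c sums 2>/dev/null | awk -F: '/FAILED/ {print $1}' |
    while read -r f; do
        cp -p "$BACKUP/$f" "$f"   # restore the known-good copy
    done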
    • md5sum *.pdf > sums
      md5sum -c sums

      Not exactly automated, but I wouldn't exactly call typing two lines manual labor, and once you've got the sums you really just need the second line.

      That assumes that all the PDFs start out valid and will never be validly changed. What you really want is something like using Ghostscript to render each PDF to a temporary image, and then an automated check to make sure the image isn't 100% blank. (Or just accepting the result if Ghostscript doesn't exit with an error. That's even easier. Pretty much just a bash one-liner. Well, maybe a three or four-liner if you want it to be readable...)

      • by xtracto ( 837672 ) *
        That's even easier. Pretty much just a bash one-liner. Well, maybe a three or four-liner if you want it to be readable...)

        haha... you should see my R-commandscript-sed-awk-paste-echo-forloop-bash one liners I did to process some R data analysis and make it latex-table-ready and their respective graphics =oD

        Yay for Linux... that was teh k1ll3r app that made me not run windows at work
      • If you were using perl, you could use the PDF::Reuse library or PDF::API2 to do all kinds of crap. If it's not a valid pdf, the libraries throw all kinds of errors when you attempt to open the file. With that, you can even look at things like the number of pages, the content on the pages, etc.

        Mind you, if the rendering is fubared, like a font problem or something, so the page looks like crap, it may still be a valid PDF and pass through any sort of check with no problem. A corrupted image will still show up inside an otherwise valid file.
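        A minimal shell-driven sketch of that idea, assuming PDF::API2 is installed from CPAN (the loop and names are illustrative):

        # PDF::API2->open() dies on files it can't parse, so test the exit status
        for f in *.pdf; do
            perl -MPDF::API2 -e 'PDF::API2->open(shift)' "$f" 2>/dev/null \
                || echo "failed to open: $f"
        done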
  • use ZFS (Score:3, Informative)

    by larry bagina ( 561269 ) on Thursday May 22, 2008 @03:55PM (#23510312) Journal
    it has built in integrity checking and stuff.
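    In practice that means keeping the archive on a ZFS pool and scrubbing it periodically; for a pool named tank (an example name):

    zpool scrub tank        # walk every block and verify its checksum
    zpool status -v tank    # lists any files with unrecoverable errors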
    • Right... so learn BSD or lock yourself into a proprietary operating system, and use an experimental filesystem (granted, BSD's "experimental" is another man's rock solid, but if you go the Mac route it's not quite as safe).

  • Just use the Linux md5sum utility:

    Create checksums: md5sum file > file.md5
    Test: md5sum -c file.md5

    Or use a compressor: bzip2 file
    Test: bzip2 -tv file.bz2

  • Multivalent (Score:2, Informative)

    by Anonymous Coward
    I once found this: []

    The Multivalent suite of document tools includes a command-line utility that validates PDFs. It can be run across a whole directory of files too, so it should do the trick.

    Written in Java, so it should run anywhere.
  • Use git. []

    Check them all into a repository, then periodically run git-fsck. Git hashes all files in a repository with SHA-1 when they're first committed, and git-fsck recalculates the hashes.
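    A sketch of that workflow (the directory is a placeholder):

    cd "$HOME/My Paperport Documents"
    git init
    git add -A
    git commit -m "baseline PDF archive"
    git fsck --full   # later, from cron: reports any corrupted objects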

  • by gblues ( 90260 ) on Thursday May 22, 2008 @04:47PM (#23510998)
    The OP is not asking about preventing future corruption; the OP wants an automated way to sift through 6500 PDFs to find corrupt (or at least, potentially corrupt) PDF files without having to open each one by hand.

    MD5 generates a hash of the binary data of the PDF file. An MD5 hash will not tell you whether a PDF file is corrupt; it is only useful once the integrity of the PDF has been confirmed. After the integrity is confirmed, you can build your database of MD5 hashes to detect future corruption.

    To test that a given file is a valid PDF, you could probably use something like pdf2ps; you don't care about the PostScript output per se, but you'd be testing for an error code. If pdf2ps returns an error code, you set the file aside for manual verification. This should, if nothing else, whittle down that 6500-PDF archive into a much smaller subset that you can feasibly test manually using Adobe Acrobat. And those, if you "refry" them (print them back to the Adobe PDF printer to re-PDF them), will probably come out fixed so they pass the pdf2ps test.

    I will leave the actual writing of a script to recurse through your directories, feed each PDF file through pdf2ps, and test for error codes, as an exercise to the OP. Now that you have an idea of what to do, actually doing it should be pretty simple.
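    A bare-bones version of that exercise might look like this (a sketch only; suspect-pdfs.txt is an illustrative name):

    find "$HOME/My Paperport Documents" -name '*.pdf' -print0 |
    while IFS= read -r -d '' f; do
        pdf2ps "$f" /dev/null 2>/dev/null || echo "$f" >> suspect-pdfs.txt
    done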
    • The OP is not asking about preventing future corruption; the OP wants an automated way to sift through 6500 PDFs to find corrupt (or at least, potentially corrupt) PDF files without having to open each one by hand.

      If that is indeed the case, and he's repeatedly encountering corrupt files, then I'd suggest he's asked the wrong question.

      As for pdf2ps, I'm unfamiliar with what error codes it returns, but if it's as useful as you state, then it's worth pointing out that all the utilities he'll need (including md5 and pdf2ps) are freely available on every platform he mentioned.
  • PDF validation (Score:5, Informative)

    by Peter H.S. ( 38077 ) on Thursday May 22, 2008 @05:01PM (#23511182) Homepage
    Here is a Java command-line tool designed to check the validity of thousands of PDF files: []

    There is also a tool for repairing some PDF errors: []

    Never used it myself; I just stumbled over it when I was searching for some PDF software.

  • Ghostscript (Score:3, Interesting)

    by Marillion ( 33728 ) on Thursday May 22, 2008 @05:08PM (#23511272)

    Many are commenting on using checksums (MD5, SHA, ...) to validate that the file hasn't changed. This is good. However, none of these can actually tell whether the PDF was good to begin with. I would suggest using Ghostscript to verify that the PDF is properly structured. Ghostscript is an open-source tool that can convert PDF and PostScript files to several other formats. If Ghostscript can interpret the PDF file without errors, then odds are the file is good.
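    A minimal sketch of that check, assuming a stock Ghostscript install; the nullpage device interprets every page but throws the output away:

    for f in *.pdf; do
        gs -dNOPAUSE -dBATCH -sDEVICE=nullpage "$f" >/dev/null 2>&1 \
            || echo "FAILED TO INTERPRET: $f"
    done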

  • Prevention, first (Score:3, Insightful)

    by Anonymous Coward on Thursday May 22, 2008 @05:26PM (#23511512)
    One of the things that strikes me about the posts thus far is that nobody has asked the first and most important question: *WHY* are the files becoming corrupted? And what is the nature of the corruption?

    From a general accessibility perspective, the age of the folders shouldn't matter, nor should the age of the files contained within them: A properly operating file system will maintain the integrity of the files it tracks indefinitely, assuming the underlying media is sound and all related hardware is functioning correctly.

    Certainly, for verification of critical data, checksums are a good measure so long as they are done at the time of file creation, after verification that the files are good, but in light of the reported symptoms, I'd investigate the source of the problem first, and correct it. Then I'd make provisions for checksumming, in addition to regular file system health checks, before backing up those files and their checksums.

    Proceeding from a "bottom-up point of view": For Windows-based systems, regardless of the file system in use (although I'd hope you'd be using NTFS), regular file system scans via CHKDSK are a must. The same applies to the file systems of other OSes: run whatever utilities are available to verify the integrity of the file system on each hard drive regularly.

    In addition, most hard drive manufacturers have utilities that you can download for free that will non-destructively scan the media for grown defects. These are typically available as ISOs: Make a CD, boot from it, and follow the instructions carefully, preferably after making a full, verified backup. Naturally, you'll have to know the manufacturer(s) of your hard drives.

    Once you've identified the cause of the corruption, and corrected it, then you can (and should) make provisions for checksums.

    But, there are other things that you can, and should check as well. Make sure that the AC power to your computer is sound from an electrical perspective and that the power available is sufficient for the load being placed upon it. Buy a good UPS if you don't have one already, and if you do have one, test it.

    Then, test the power supply in the computer to ensure that it is providing adequate power.

    Then test the memory in your computer.

    Then test the hard drives, both surface level and file system level.
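    On Linux, for example, smartmontools and badblocks cover the drive-level checks (device names are examples; make sure you have a verified backup first):

    smartctl -H /dev/sda        # overall SMART health assessment
    smartctl -t long /dev/sda   # schedule the drive's extended self-test
    badblocks -sv /dev/sda      # read-only surface scan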

    Hope this helps.
    • I was going to post exactly this. Files randomly becoming corrupted? Maybe if I had 5,000 Chinese kids remembering numbers, but data shouldn't just change on a computer, whether it's over the wire, on disk, or in memory. Treat the disease, not the symptom.

    • by CaseyB ( 1105 )
      Darn right! This is like asking how to efficiently procure and install pots throughout your house to catch all the water dripping from your ceiling.
  • If you've got files on your computer that you only read, never write, and those files are getting corrupted, then it sounds like you have a problem with your filesystem, or a problem with your hardware. You need to find and fix the problem with the filesystem or hardware, not apply band-aids to PDF files if the problem has nothing to do with the PDF format per se.

    Another possibility would be that you're using buggy software that is supposed to open PDF files in read-only mode, but actually corrupts them.

    • Re: (Score:3, Insightful)

      by Peter H.S. ( 38077 )
      Personally, I think the PDF files were dodgy from the beginning, and that the errors only show up when using newer generations of PDF-viewing software. That would explain why it seems to be only very old files that are corrupted; a more random system error or a systematic software problem would corrupt newer files too.

      Your suggestions are of course valid; it must be considered a high priority to find out whether the system is corrupting the files, or whether they were bad from the beginning.

  • ...bittorrent? It has built-in file integrity checking. Simply create a torrent for the files and have the backup source seed. Then periodically check the integrity of the files (many clients let you force a recheck of file integrity) and it will not only identify corrupt files, but automatically download replacements from the backup. If you have to add files to the backup, it does require you to make a new torrent. Still, if you set things up right it proves to be a rather elegant solution; I've used it myself.
  • This could be anywhere from "just works" to very effective: a distributed version control system, so copies can be kept on multiple systems easily, and something that can check the integrity of files. [] seems interesting; look for distributed and atomic commits, and signed tags (though this by itself doesn't guarantee it catches file errors right away).

    I use and love Git, and though Windows support is there, I have heard it is questionable.
  • XPdf (Score:2, Informative)

    by dtrumpet ( 1294668 )
    XPdf comes with a 'pdfinfo' command line utility. It returns non-zero if the PDF is corrupt. Should be somewhat efficient and very easy to automate.
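    A sketch of the automation, keying off pdfinfo's exit code:

    for f in *.pdf; do
        pdfinfo "$f" >/dev/null 2>&1 || echo "CORRUPT? $f"
    done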
