Data Storage Hardware

Ask Slashdot: How Do I De-Dupe a System With 4.2 Million Files? 440

First time accepted submitter jamiedolan writes "I've managed to consolidate most of my old data from the last decade onto drives attached to my main Windows 7 PC. Lots of files of all types, from digital photos & scans to HD video files (plus website backups mixed in, which are the reason for such a high file count). More recently I've organized files into a reasonable folder system and have an active, automated backup system. The problem is that I know I have many old files that have been duplicated multiple times across my drives (many from doing quick backups of important data to an external drive that later got consolidated onto a single larger drive), chewing up space. I tried running a free de-dupe program, but it ran for a week straight and was still 'processing' when I finally gave up on it. I have a fast system, an i7 at 2.8GHz with 16GB of RAM, but currently have 4.9TB of data with a total of 4.2 million files. Manual sorting is out of the question due to the number of files and my old, sloppy filing (folder) system. I do need to keep the data; nuking it is not a viable option."
Comments Filter:
  • CRC (Score:5, Informative)

    by Spazmania ( 174582 ) on Sunday September 02, 2012 @08:32AM (#41205117) Homepage

    Do a CRC32 of each file. Write one line per file to a list, in this order: CRC, directory, filename. Sort the list by CRC. Then read the list linearly, doing a full compare on any files that share a CRC (these will be adjacent in the sorted list).
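
    A minimal Python sketch of that index-sort-verify approach (paths and function names here are illustrative, not from the post):

    # Hash every file, sort by CRC so identical values sit next to each other,
    # then byte-compare only the files that share a CRC.
    import filecmp
    import os
    import zlib

    def crc32_of(path, chunk=1 << 20):
        """CRC32 of a file, read in 1 MB chunks so big files don't fill RAM."""
        crc = 0
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                crc = zlib.crc32(block, crc)
        return crc & 0xFFFFFFFF

    def find_duplicates(root):
        records = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                try:
                    records.append((crc32_of(full), full))
                except OSError:
                    pass  # unreadable file: skip it
        records.sort()  # identical CRCs become adjacent
        dupes = []
        for (crc_a, path_a), (crc_b, path_b) in zip(records, records[1:]):
            # A matching CRC is only a hint; confirm byte for byte.
            if crc_a == crc_b and filecmp.cmp(path_a, path_b, shallow=False):
                dupes.append((path_a, path_b))
        return dupes

    if __name__ == "__main__":
        for a, b in find_duplicates(r"D:\archive"):  # example root, adjust to taste
            print(a, "==", b)

    Note that this reads every byte of every file once; the replies below cut that down by filtering on file size first.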

    • Re:CRC (Score:5, Informative)

      by Anonymous Coward on Sunday September 02, 2012 @08:36AM (#41205157)

      s/CRC32/sha1 or md5, you won't be CPU bound anyway.

      • Re:CRC (Score:5, Informative)

        by Kral_Blbec ( 1201285 ) on Sunday September 02, 2012 @08:38AM (#41205173)
        Or just go by file size first, then do a hash. No need to compute a hash to compare a 1MB file and a 1KB file.
        • Re:CRC (Score:5, Informative)

          by caluml ( 551744 ) <slashdotNO@SPAMspamgoeshere.calum.org> on Sunday September 02, 2012 @08:58AM (#41205331) Homepage
          Exactly. What I do is this:

          1. Compare filesizes.
          2. When there are multiple files with the same size, start diffing them. I don't read the whole file to compute a checksum - that's inefficient with large files. I simply read the two files byte by byte, and compare - that way, I can quit checking as soon as I hit the first different byte.

          Source is at https://github.com/caluml/finddups [github.com] - it needs some tidying up, but it works pretty well.

          git clone, and then mvn clean install.
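
          The linked finddups repository builds with Maven; a rough Python sketch of the same idea (group by size, then diff with an early exit at the first mismatching block; names here are illustrative) would be:

          # Group files by size, then compare same-size files block by block,
          # bailing out at the first difference.
          import os
          from collections import defaultdict
          from itertools import combinations

          def same_bytes(path_a, path_b, chunk=1 << 16):
              with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
                  while True:
                      block_a, block_b = fa.read(chunk), fb.read(chunk)
                      if block_a != block_b:
                          return False  # first mismatch: stop reading immediately
                      if not block_a:
                          return True   # both files exhausted, no difference found

          def duplicates_by_diffing(root):
              by_size = defaultdict(list)
              for dirpath, _, names in os.walk(root):
                  for name in names:
                      full = os.path.join(dirpath, name)
                      try:
                          by_size[os.path.getsize(full)].append(full)
                      except OSError:
                          pass
              for size, paths in by_size.items():
                  if size == 0 or len(paths) < 2:
                      continue          # unique sizes (and empty files) need no work
                  for a, b in combinations(paths, 2):
                      if same_bytes(a, b):
                          yield a, b

          As the reply below points out, the pairwise comparison gets expensive once many files share the same size.
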
          • Re:CRC (Score:5, Insightful)

            by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Sunday September 02, 2012 @09:38AM (#41205601) Journal

            Part 2 of your method will quickly bog down if you run into many files that are the same size. Takes (n choose 2) comparisons, for a problem that can be done in n time. If you have 100 files all of one size, you'll have to do 4950 comparisons. Much faster to compute and sort 100 checksums.

            Also, you don't have to read the whole file to make use of checksums, CRCs, hashes and the like. Just check a few pieces likely to be different if the files are different, such as the first and last 2000 bytes. Then for those files with matching parts, check the full files.
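
            A sketch of that partial-fingerprint idea (the 2000-byte window comes from the comment above; the helper names are made up):

            # Fingerprint only the first and last 2000 bytes; only files whose
            # (size, partial fingerprint) still match get a full-content hash.
            import hashlib
            import os

            PARTIAL = 2000  # bytes checked at each end of the file

            def partial_fingerprint(path):
                size = os.path.getsize(path)
                h = hashlib.md5()
                with open(path, "rb") as f:
                    h.update(f.read(PARTIAL))            # first 2000 bytes
                    if size > PARTIAL:
                        f.seek(max(size - PARTIAL, PARTIAL))
                        h.update(f.read(PARTIAL))        # last 2000 bytes
                return size, h.hexdigest()

            def full_hash(path, chunk=1 << 20):
                h = hashlib.md5()
                with open(path, "rb") as f:
                    for block in iter(lambda: f.read(chunk), b""):
                        h.update(block)
                return h.hexdigest()

            Only the files that still agree on (size, partial fingerprint) need the full_hash pass or a final byte-for-byte compare.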

            • Re:CRC (Score:4, Insightful)

              by K. S. Kyosuke ( 729550 ) on Sunday September 02, 2012 @10:05AM (#41205741)
              Why not simply do it adaptively? Two or three files of the same size => check by comparing, more files of the same size => check by hashing.
            • by HiggsBison ( 678319 ) on Sunday September 02, 2012 @12:08PM (#41206511)

              If you have 100 files all of one size, you'll have to do 4950 comparisons.

              You only have to do 4950 comparisons if you have 100 unique files.

              What I do is pop the first file from the list, to use as a standard, and compare all the files with it, block by block. If a block fails to match, I give up on that file matching the standard. The files that don't match generally don't go very far, and don't take much time. For the ones that match, I would have taken all that time if I was using a hash method anyway. As for reading the standard file multiple times: It goes fast because it's in cache.

              The ones that match get taken from the list. Obviously I don't compare the ones that match with each other. That would be stupid.

              Then I go back to the list and rinse/repeat until there are fewer than 2 files left.

              I have done this many times with a set of 3 million files which take up about 600GB.
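
              A Python sketch of that pop-and-compare loop, assuming the same-size candidate list has already been built (function names are mine):

              # Given a list of same-size files, repeatedly pop one as the "standard"
              # and compare every remaining file against it block by block.
              def blocks_match(path_a, path_b, chunk=1 << 20):
                  with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
                      while True:
                          a, b = fa.read(chunk), fb.read(chunk)
                          if a != b:
                              return False   # bail out at the first differing block
                          if not a:
                              return True

              def group_identical(paths, chunk=1 << 20):
                  """Yield lists of files whose contents are identical."""
                  remaining = list(paths)
                  while len(remaining) >= 2:
                      standard = remaining.pop(0)
                      matches, rest = [standard], []
                      for other in remaining:
                          if blocks_match(standard, other, chunk):
                              matches.append(other)  # identical: joins this group
                          else:
                              rest.append(other)     # differs: waits for a later round
                      if len(matches) > 1:
                          yield matches
                      remaining = rest

              Mismatches bail out early, and the standard file tends to stay in the page cache, which is why re-reading it is cheap.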

            • by yakatz ( 1176317 )
              And use a Bloom Filter [wikipedia.org] to easily eliminate many files without doing a major comparison of all 100 checksums.
          • Added benefit, when sorting by filesize you can hit the biggest ones first. Depending on your dataset, most of your redundant data might be in just a few duplicated files.

          • Re:CRC (Score:4, Informative)

            by inKubus ( 199753 ) on Monday September 03, 2012 @02:28AM (#41211251) Homepage Journal

            For the lazy, here are 3 more tools:
            fdupes [caribe.net], duff [sourceforge.net], and rdfind [pauldreik.se].

            Duff claims it's O(n log n), because they:

            Only compare files if they're of equal size.
            Compare the beginning of files before calculating digests.
            Only calculate digests if the beginning matches.
            Compare digests instead of file contents.
            Only compare contents if explicitly asked.
             

        • Re: (Score:3, Informative)

          by belg4mit ( 152620 )

          Unique Filer http://www.uniquefiler.com/ [uniquefiler.com] implements these short-circuits for you.

          It's meant for images but will handle any filetype, and even runs under WINE.

        • by Anonymous Coward

          Only hash the first 4K of each file and just do them all. The size check will save a hash only for files with unique sizes, and I think there won't be many with 4.2M media files averaging ~1MB. The second near-full directory scan won't be all that cheap.

      • Re:CRC (Score:4, Insightful)

        by Joce640k ( 829181 ) on Sunday September 02, 2012 @09:07AM (#41205401) Homepage

        s/CRC32/sha1 or md5, you won't be CPU bound anyway.

        Whatever you use it's going to be SLOW on 5TB of data. You can probably eliminate 90% of the work just by:
        a) Looking at file sizes, then
        b) Looking at the first few bytes of files with the same size.

        After THAT you can start with the checksums.

        • by WoLpH ( 699064 )

          Indeed, I once created a dedup script which basically did that.

          1. compare the file sizes
          2. compare the first 1MB of the file
          3. compare the last 1MB of the file
          4. compare the middle 1MB in the file

          It's not a 100% foolproof solution but it was more than enough for my use case at that time and much faster than getting checksums.

        • Re:CRC (Score:5, Informative)

          by blueg3 ( 192743 ) on Sunday September 02, 2012 @10:06AM (#41205745)

          b) Looking at the first few bytes of files with the same size.

          Note that there's no reason to only look at the first few bytes. On spinning disks, any read smaller than about 16K will take the same amount of time. Comparing two 16K chunks takes zero time compared to how long it takes to read them from disk.

          You could, for that matter, make it a 3-pass system that's pretty fast:
          a) get all file sizes; remove all files that have unique sizes
          b) compute the MD5 hash of the first 16K of each file; remove all files that have unique (size, header-hash) pairs
          c) compute the MD5 hash of the whole file; remove all files that have unique (size, hash) pairs

          Now you have a list of duplicates.

          Don't forget to eliminate all files of zero length in step (a). They're trivially duplicates but shouldn't be deduplicated.
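
          A Python sketch of that three-pass filter (function names are mine, not from the post):

          # Pass (a): unique sizes (and empty files) drop out.
          # Pass (b): hash only the first 16K; unique (size, header-hash) pairs drop out.
          # Pass (c): full-file hash; whatever still collides is a duplicate set.
          import hashlib
          import os
          from collections import defaultdict

          def md5_of(path, limit=None, chunk=1 << 20):
              h, remaining = hashlib.md5(), limit
              with open(path, "rb") as f:
                  while True:
                      block = f.read(chunk if remaining is None else min(chunk, remaining))
                      if not block:
                          break
                      h.update(block)
                      if remaining is not None:
                          remaining -= len(block)
                          if remaining <= 0:
                              break
              return h.hexdigest()

          def three_pass(paths):
              by_size = defaultdict(list)
              for p in paths:
                  size = os.path.getsize(p)
                  if size > 0:
                      by_size[size].append(p)
              by_header = defaultdict(list)
              for size, group in by_size.items():
                  if len(group) > 1:
                      for p in group:
                          by_header[(size, md5_of(p, limit=16 * 1024))].append(p)
              by_full = defaultdict(list)
              for (size, _), group in by_header.items():
                  if len(group) > 1:
                      for p in group:
                          by_full[(size, md5_of(p))].append(p)
              return [g for g in by_full.values() if len(g) > 1]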

    • by wisty ( 1335733 )

      This is similar to what git and ZFS do (but with a better hash, some kind of sha I think).

    • by Pieroxy ( 222434 )

      Exactly.

      1. Install MySQL,
      2. create a table (CRC, directory, filename, filesize)
      3. fill it in
      4. play with inner joins.

      I'd even go down the path of forgetting about the CRC. Before deleting something, do a manual check anyway. The CRC has the advantage of making things very straightforward, but it is a bit more complex to generate.
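
      A sketch of the same idea using SQLite instead of MySQL (an assumed substitution so the example is self-contained; the table layout follows the comment above):

      # Load (crc, directory, filename, filesize) rows into a table and let the
      # database group the duplicates.
      import sqlite3

      def build_table(rows, db_path="dedupe.sqlite"):
          """rows: iterable of (crc, directory, filename, filesize) tuples."""
          db = sqlite3.connect(db_path)
          db.execute("""CREATE TABLE IF NOT EXISTS files
                        (crc TEXT, directory TEXT, filename TEXT, filesize INTEGER)""")
          db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", rows)
          db.commit()
          return db

      def duplicate_groups(db):
          # Same filesize and same CRC => candidate duplicates, grouped together.
          return db.execute("""
              SELECT filesize, crc,
                     GROUP_CONCAT(directory || '/' || filename, ' | ') AS copies
              FROM files
              GROUP BY filesize, crc
              HAVING COUNT(*) > 1
              ORDER BY filesize DESC
          """).fetchall()

      The GROUP BY plus HAVING does the work of the join: any (filesize, crc) bucket with more than one row is a candidate duplicate set, biggest files first.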

      • Use SHA-1 instead of CRC.

        • Re: (Score:3, Interesting)

          by Anonymous Coward
          With 4.2 million files, given the probability of SHA-1 collisions plus the birthday paradox and there will be around 500 SHA-1 collisions which are not duplicates. SHA-512 reduces that number to 1.
          • by Goaway ( 82658 )

            I don't know where you are finding these numbers, but they are about as wrong as it is possible to get.

            There is no known SHA-1 collision yet in the entire world. You're not going to find 500 of them in your dump of old files.

          • Re:CRC (Score:5, Informative)

            by xigxag ( 167441 ) on Sunday September 02, 2012 @02:10PM (#41207491)

            With 4.2 million files, given the probability of SHA-1 collisions plus the birthday paradox and there will be around 500 SHA-1 collisions which are not duplicates.

            That's totally, completely wrong. The birthday problem isn't a breakthrough concept, and the probability of random SHA-1 collisions is therefore calculated with it in mind. The number is known to be 1/2^80. This is straightforwardly derived from the total number of SHA-1 values, 2^160, which is then immensely reduced by the birthday paradox to 2^80 expected hashes required for a collision. This means that a hard drive with 2^80 or 1,208,925,819,614,629,174,706,176 files would have on average ONE collision. Note that this is a different number than the number of hashes one has to generate for a targeted cryptographic SHA-1 attack, which with best current theory is on the order of 2^51 [wikipedia.org] for the full 80-round SHA-1, although as Goaway has pointed out, no such collision has yet been found.

            Frankly I'm at a loss as to how you arrived at 500 SHA-1 collisions out of 4.2 million files. That's ludicrous. Any crypto hashing function with such a high collision rate would be useless. Much worse than MD5, even.

      • Re: (Score:3, Interesting)

        by vlm ( 69642 )

        4. play with inner joins.

        Much like there's 50 ways to do anything in Perl, there's quite a few ways to do this in SQL.

        select filename_and_backup_tape_number_and_stuff_like_that, count(*) as number_of_copies
        from pile_of_junk_table
        group by md5hash
        having number_of_copies > 1

        There's another strategy where you mush two tables up against each other... one is basically the DISTINCT of the other.

        Triggers are widely complained about, but you can implement a trigger system (or pseudo-trigger, where you make a wrapper function in your app).

      The CRC is not just a bit more complex to generate; it forces you to read the entire file. Reading 5TB of data takes quite a lot more time than walking a filesystem with 4M files. So yes, delay the CRC and play with file sizes first.

    • DO NOT do a CRC, do a hash. Too many chances of collision with a CRC.

      But that still won't fix his real problem - he's got lots of data to process and only one system to process it with.

      • Did you read the bit about "doing a full compare on any file with the same CRC"?

        The CRC is just for bringing likely files together. It will work fine.

      • Re:CRC (Score:4, Interesting)

        by igb ( 28052 ) on Sunday September 02, 2012 @09:22AM (#41205507)
        The problem isn't CRC vs secure hash, the problem is the number of bits available. He's not concerned about an attacker sneaking collisions into his filestore, and he always has the option of either a byte-by-byte comparison or choosing some number of random blocks to confirm the files are in fact the same. But 32 bits isn't enough simply because he's guaranteed to get collisions even if all the files are different, as he has more than 2^32 files. But using two different 32-bit CRC algorithms, for example, wouldn't be "secure" but would be reasonably safe. But as he's going to be disk bound, calculating an SHA-512 would be reasonable, as he can probably do that faster than he can read the data.

        I confess, if I had a modern i5 or i7 processor and appropriate software I'd be tempted to in fact calculate some sort of AES-based HMAC, as I would have hardware assist to do that.

        • ...as he has more than 2^32 files.

          4.2 million, not billion. About 2^22 files.

          • by blueg3 ( 192743 )

            Fortunately, you actually only need about 2^16 files to get collisions on a 32-bit CRC.

            • Re:CRC (Score:5, Interesting)

              by b4dc0d3r ( 1268512 ) on Sunday September 02, 2012 @10:32AM (#41205873)

              This was theorized by one of the RSA guys (Rivest, if I'm not mistaken). I helped support a system that identified files by CRC32, as a lot of tools did back then. As soon as we got to about 65k files (2^16), we had two files with the same CRC32.

              Let me say, CRC32 is a very good algorithm. So good, I'll tell you how good. It is 4 bytes long, which means in theory you can change any 4 bytes of a file and get a CRC32 collision, unless the algorithm distributes them randomly, in which case you will get more or less.

              I naively tried to reverse engineer a file from a known CRC32. Optimized and recursive, on a 333 MHz computer, it took 10 minutes to generate the first collision. Then every 10 minutes or so. Every 4 bytes (last 4, last 5 with the original last byte, last 6 with the original last 2 bytes, etc.) there was a collision.

              Compare file sizes first, not CRC32. The 2^16 estimate is not only mathematically proven, it also holds up in the big boy world. I tried to move the community towards another hash.

              CRC32 *and* filesize are a great combination. File size is not included in the 2^16 estimate. I have yet to find two files in the real world, in the same domain (essentially type of file), with the same size and CRC32.

              Be smart, use the right tool for the job. First compare file size (ignoring things like mp3 ID3 tags, or other headers). Then do two hashes of the contents - CRC32 and either MD5 or SHA1 (again ignoring well-known headers if possible). Then out of the results, you can do a byte for byte comparison, or let a human decide.

              This is solely to dissuade CRC32 based identification. After all, it was designed for error detection, not identification. For a 4-byte file, my experience says CCITT standard CRC32 will work for identification. For 5 byte files, you can have two bytes swapped and possibly have the same result. The longer the file, the less likely it is to be unique.

              Be smart, use size and two or more hashes to identify files. And even then, verify the contents. But don't compute hashes on every file - the operating system tells you file size as you traverse the directories, so start there.
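
              A sketch of that size-plus-two-hashes key (the key layout is just one way to combine them; both hashes come out of a single read pass):

              # Candidate-duplicate key: (size, CRC32, MD5). The OS hands you the size
              # for free while walking directories; only compute the hashes for files
              # whose sizes already collide, and keep a byte-for-byte check as the
              # final word before deleting anything.
              import hashlib
              import os
              import zlib

              def identity_key(path, chunk=1 << 20):
                  size = os.path.getsize(path)
                  crc, md5 = 0, hashlib.md5()
                  with open(path, "rb") as f:
                      for block in iter(lambda: f.read(chunk), b""):
                          crc = zlib.crc32(block, crc)  # both digests from one pass
                          md5.update(block)
                  return size, crc & 0xFFFFFFFF, md5.hexdigest()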

              • Re:CRC (Score:5, Insightful)

                by iluvcapra ( 782887 ) on Sunday September 02, 2012 @12:19PM (#41206619)

                First compare file size (ignoring things like mp3 ID3 tags, or other headers).

                I once had to write an audio file de-duplicator; one of the big problems was you would ignore the metadata and the out-of-band data when you did the comparisons, but you always had to take this stuff into account when you were deciding which version of a file to keep -- you didn't want to delete two copies of a file with all the tags filled out and keep the one that was naked.

                My de-duper worked like everyone here is saying -- it cracked open wav and aiff (and Sound Designer 2) files, captured their sample count and sample format into a sqlite db, did a couple of big joins and then did some SHA1 hashes of likely suspects. All of this worked great, but once I had the list I had the epiphany that the real problem of these tools is the resolution and how you make sure you're doing exactly what the user wants.

                How do you decide which one to keep? You can just do hard links, but...

                • The users I was working with were very uncomfortable with hard links, they didn't really understand the concept and were concerned that it made it difficult to know if you were "really" throwing something away when you dragged something to the trash. (It's stupid but it was their box.)
                • Our existing backup/archival software wouldn't do the right thing with hard links, so it'd save no space on the tapes.
                • Our audio workstation software wouldn't read audio off of files that were hard links on OS X (because hard links on OSX aren't really hard links, I believe our audio workstation vendor have since resolved this).

                But let's say you can do hard links, no problem. How do you decide which instance of the file is to be kept, if you've only compared the "real" content of the file and ignored metadata? You could just give the user a big honking list of every set of files that are duplicates -- two here, three here, six here, and then let them go through and elect which one will be kept, but that's a mess and 99% of the time they're going to select a keeper on the basis of which part of the directory tree it's in. So, you need to do a rule system or a preferential ranking of parts of the directory hierarchy that tell the system "keep files you find here." Now, the files will also have metadata, so you also have to preferentially rank the files on the basis of its presence -- you might also rank files higher if your guy did the metadata tagging, because things like audio descriptions are often done with a specialized jargon that can be specific to a particular house.

                Also, it'd be very common to delete a file from a directory containing an editor's personal library and replace it with a hard link to a file in the company's main library -- several people would have copies of the same commercial sound, or an editor would be the recordist of a sound that was subsequently sold to a commercial library, or whatever. Is it a good policy to replace his file with a hardlink to a different one, particularly if they differ in the metadata? Directories on a volume are often controlled by different people with different policies and proprietary interests in the files -- maybe the company "owns" everything, but it still can create a lot of internal disputes if files in a division or individual project's library folder start getting their metadata changed, on account of being replaced with a hard link to a "better" file in the central repository. We can agree not to de-dup these, but it's more rules and exceptions that have to be made.

                Once you have the list of duplicates, and maybe the rules, do you just go and delete, or do you give the user a big list to review? And if, upon review, he makes one change to one duplicate instance, it'd be nice to have that change intelligently reflected on the others. The rules have to be applied to the dupe list interactively and changes have to be reflected in the same way, otherwise it becomes a miserable experience for the user to de-dupe 1M files over 7 terabytes. The resolution of duplicates is the hard part; the finding of dupes is relatively easy.

        • by Zeroko ( 880939 )
          The relevant number when worrying about non-adversarial hash collisions is the square root of the number of outputs (assuming they are close enough to uniformly distributed), due to the birthday paradox. So in the case of CRC32, more than ~2^16 files makes a collision likely (well, 2^16 gives about 39%), & with 2^22, the probability is nearly indistinguishable from 1 (it being over 99.9% for only 2^18 files).
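
          Those figures fall out of the usual birthday approximation, p ≈ 1 - exp(-n^2 / 2N) with N = 2^32; a quick check:

          # Probability of at least one CRC32 collision among n distinct files,
          # using the birthday approximation with N = 2^32 possible values.
          import math

          def collision_probability(n, bits=32):
              return 1.0 - math.exp(-n * n / (2.0 * 2 ** bits))

          for files in (2 ** 16, 2 ** 18, 2 ** 22):
              print(f"{files} files: {collision_probability(files):.4%}")
          # 65536 files   -> ~39%
          # 262144 files  -> ~99.97%
          # 4194304 files -> ~100%
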
    • Re:CRC (Score:5, Insightful)

      by igb ( 28052 ) on Sunday September 02, 2012 @08:52AM (#41205291)
      That involves reading every byte. It would be faster to read the bytecount of each file, which doesn't involve reading the files themselves as that metadata is available, and then exclude from further examination all the files which have unique sizes. You could then read the first block of each large file, and discard all the files that have unique first blocks. After that, CRC32 (or MD5 or SHA1 --- you're going to be disk-bound anyway) and look for duplicates that way.
      • Sounds ideal. Wouldn't take long to code, nor execute.

      • divide and conquer.

        your idea of using file size as first discriminant is good. its fast and throws out a lot of things that don't need to be checked.

        Another accelerant is to check whether the count of the # of files in a folder is the same, and if a few are the same, maybe the rest are. Use 'info' like that to make it run faster.

        I have this problem and am going to write some code to do this, too.

        But I might have some files that are 'close' to the others, and so I need smarter code. Example: some music files mig

    • Re:CRC (Score:5, Informative)

      by Zocalo ( 252965 ) on Sunday September 02, 2012 @08:58AM (#41205327) Homepage
      No. No. No. Blindly CRCing every file is probably what took so long on the first pass and is a terribly inefficient way of de-duplicating files.

      There is absolutely no point in generating CRCs of files unless they match on some other, simpler-to-compare characteristic like file size. The trick is to break the problem apart into smaller chunks. Start with the very large files; the exact size break to use will depend on the data set, but since the poster mentioned video files, say everything over 1GB to start. Chances are you can fully de-dupe your very large files manually, based on nothing more than a visual inspection of names and file sizes, in little more time than it takes to find them all in the first place. You can then exclude those files from further checks, and more importantly, from CRC generation.

      After that, try and break the problem down into smaller chunks. Whether you are sorting on size, name or CRC, it's quicker to do so when you only have a few hundred thousand files rather than several million. Maybe do another size-constrained search; 512MB-1GB, say. Or, if you have them, look for duplicated backup files in the form of ZIP files, or whatever archive format(s) you are using, based on their extension - that also saves you having to expand and examine the contents of multiple archive files. Similarly, do a de-dupe of just the video files by extension, as these should again lend themselves to rapid manual sorting without having to generate CRCs for many GB of data. Another grouping to consider might be to at least try and get all of the website data, or as much of it as you can, into one place and de-dupe that, and consider whether you really need multiple archival copies of a site, or whether just the latest/final revision will do.

      By the time you've done all that, including moving the stuff that you know is unique out of the way and into a better filing structure as you go, the remainder should be much more manageable for a single final pass. Scan the lot, identify duplicates based on something simple like the file size and, ideally, manually get your de-dupe tool to CRC only those groups of identically sized files that you can't easily tell apart like bunches of identically sized word processor or image files with cryptic file names.
    • It's possible the free de-dup program was trying to do that.
      Best case scenarios would put your hash time at roughly 14~55 hours (at 100 MB/s down to 25 MB/s) for 4.9 TB.

      But millions of small files are the absolute worst case scenario.
      God help you if there's any fragmentation.

    • Re:CRC (Score:5, Informative)

      by Anonymous Coward on Sunday September 02, 2012 @09:05AM (#41205393)

      If you get a linux image running (say in a livecd or VM) that can access the file system then fdupes is built to do this already. Various output format/recursion options.

      From the man page:
      DESCRIPTION
                    Searches the given path for duplicate files. Such files are found by
                    comparing file sizes and MD5 signatures, followed by a byte-by-byte
                    comparison.

    • This is a very fun programming task!

      Since it will be totally limited by disk IO, the language you choose doesn't really matter, as long as you make sure that you never read each file more than once:

      1) Recursive scan of all disks/directories, saving just file name and size plus a pointer to the directory you found it in.
      If you have multiple physical disks you can run this in parallel, one task/thread for each disk.

      2) Sort the list by file size.

      3) For each file size with multiple entries

  • by smash ( 1351 ) on Sunday September 02, 2012 @08:37AM (#41205163) Homepage Journal
    as per subject.
    • Re:ZFS (Score:5, Informative)

      by smash ( 1351 ) on Sunday September 02, 2012 @08:39AM (#41205189) Homepage Journal
      To clarify - no, this will not remove duplicate references to the data. The filesystem will remain intact. However, it will perform block-level dedupe of the data, which will recover your space. Duplicate references aren't necessarily a bad thing anyway; if you have any sort of content index (memory, code, etc.) that refers to data in a particular location, it will continue to work, but the space will still be recovered.
      • Could you then use something clever in ZFS to identify files that reference shared data?

  • Scan all simple file details (name, size, date, path) into a simple database. Sort on size, remove unique sized files. Decide on your criteria for identifying duplicates, whether it's by name or CRC, and then proceed to identify the dupes. Keep logs and stats.
  • If you can get them on a single filesystem (drive/partition), check out Duplicate and Same Files Searcher ( http://malich.ru/duplicate_searcher.aspx [malich.ru] ) which will replace duplicates with hardlinks. I link to that and a few others (some specific to locating similar images) on my freeware site; http://missingbytes.net/ [missingbytes.net] Good luck.
  • by Anonymous Coward on Sunday September 02, 2012 @08:41AM (#41205209)

    If you don't mind booting Linux (a live version will do), fdupes [wikipedia.org] has been fast enough for my needs and has various options to help you when multiple collisions occur. For finding similar images with non-identical checksums, findimagedupes [jhnc.org] will work, although it's obviously much slower than a straight 1-to-1 checksum comparison.

    YMMV

  • Use something like find to generate a rough "map" of where duplications are and then pull out duplicates from that. You can then work your way back up, merging as you go.

    I've found that deja-dup works pretty well for this, but since it takes an md5sum of each file it can be slow on extremely large directory trees.

  • by Anonymous Coward on Sunday September 02, 2012 @08:42AM (#41205217)

    Delete all files but one. The remaining file is guaranteed unique!

    • Delete all files but one. The remaining file is guaranteed unique!

      Preparing to delete all files. Press any key to continue.

  • by Fuzzums ( 250400 ) on Sunday September 02, 2012 @08:43AM (#41205219) Homepage

    if you really want, sort, order and index it all, but my suggestion would be different.

    If you didn't need the files in the last 5 years, you'll probably never need them at all.
    Maybe one or two. Make one volume called OldSh1t, index it, and forget about it again.

    Really. Unless you have a very good reason to un-dupe everything, don't.

    I have my share of old files and dupes. I know what you're talking about :)
    Well, the sun is shining. If you need me, I'm outside.

    • by equex ( 747231 ) on Sunday September 02, 2012 @09:34AM (#41205571) Homepage
      I probably have 5-10 gigs of everything I ever did on a computer. All this is wrapped in a perpetual folder structure of older backups within old backups within... I've tried sorting it and deduping it with various tools, but there's no point. You find this snippet named clever_code_2002.c at 10KB and then the same file somewhere else at 11KB, and how do you know which one to keep? Are you going to inspect every file? Are you going to auto-dedupe it based on size? On date? It won't work out in the end, I'm afraid. The closest I have gotten to some structure in the madness is to put all single files of the same type in the same folder, and keep a folder with stuff that needs to be in folders. Put a folder named 'unsorted' anywhere you want when you are not sure right away what to do with a file (or files). Copy all your stuff into the folders. Decide if you want to rename dupes to file_that_exists(1).jpg or leave them in their original folders and sort it out later in the file copy/move dialogs that pop up when it detects similar folders/files. I like to just rename them, and then whenever I browse a particular 'ancient' folder, I quickly sort through some files every time. Over time, it becomes tidier and tidier. One tool that everyone should use is Locate32. It indexes your preferred locations and stores them in a database when you want to (it's not a service). You can then search very much like the old Windows search function again, only much, much better.
  • by jwales ( 97533 ) on Sunday September 02, 2012 @08:43AM (#41205221) Homepage

    Since the objective is to recover disk space, the smallest couple of million files are unlikely to do very much for you at all. It's the big files that are the issue in most situations.

    Compile a list of all your files, sorted by size. The ones that are the same size and the same name are probably the same file. If you're paranoid about duplicate file names and sizes (entirely plausible in some situations), then crc32 or byte-wise comparison can be done for reasonable or absolute certainty. Presumably at that point, to maintain integrity of any links to these files, you'll want to replace the files with hard links (not soft links!) so that you can later manually delete any of the "copies" without hurting all the other "copies". (There won't be separate copies, just hard links to one copy.)

    If you give up after a week, or even a day, at least you will have made progress on the most important stuff.

    • Remember the good old days when a 10 byte text file would take up a 2KB block on your hard drive?
      Well now hard drives use a 4KB block size.

      Web site backups = millions of small files = the worst case scenario for space

      • by b4dc0d3r ( 1268512 ) on Sunday September 02, 2012 @10:49AM (#41205981)

        ZIP, test, then Par2 the zip. Even at the worst possible compression level, with the archive coming out larger than 100% of the original file sizes, you just saved a ton of space by eliminating the per-file slack.

        I got to the point where I rarely copy small files without first zipping them on the source drive. Copying them loose takes so frigging long, when a full zip or tarball takes seconds. Even a flat tar without the gzip step is a vast improvement, since the filesystem doesn't have to be continually updated. But zipping takes so little resource that Windows XP's "zipped folders" feature actually makes a lot of sense for any computer after maybe 2004, even with the poor implementation.

  • by thePowerOfGrayskull ( 905905 ) <marc,paradise&gmail,com> on Sunday September 02, 2012 @08:47AM (#41205243) Homepage Journal

    perhaps you could boot with a livecd and mount your windows drives under a single directory? Then:

    find /your/mount/point -type f -exec sha256sum {} + > sums.out
    sort sums.out | uniq -w 64 --all-repeated=separate

  • by Joe_Dragon ( 2206452 ) on Sunday September 02, 2012 @08:47AM (#41205247)

    Put the disk on the built-in SATA bus, or use eSATA or even FireWire.

  • If nuking it isn't an option, it's valuable to you. There are programs that can delete duplicates, but if you want some tolerance to changes in file-name and age, they can get hard to trust. But with the price of drives these days, is it worth your time de-duping them?

    First, copy everything to a NAS with new drives in it in RAID5. Store the old drives someplace safe (they may stop working if left off for too long, but it's better to have them in case something does go wrong with the NAS, right?).

    Then, copy ever
  • You don't say what your desired outcome is.

    If this were my data, I would proceed like this:

    • Data chunks (like web site backups) you want to keep together: weed out / move to their new permanent destination
    • Create a file database with CRC data (see comment by Spazmania)
    • Write a script to eliminate duplicate data using the file database. I would go through the files I have in the new system and delete their duplicates elsewhere.
    • Manually clean up / move to new destination for all remaining files.

    There will

  • The problem with a lot of file duplication tools is that they only consider files individually and not their location or the type of file. Often we have a lot of rules about what we'd like to keep and delete - such as keeping an mp3 in an album folder but deleting the one from the 'random mp3s' folder, or always keeping duplicate DLL files to avoid breaking backups of certain programs.

    With a large and varied enough collection of files it would take more time to automate that than you would want to spend. Th

  • I was just looking at this for a much smaller pile of data (around 300GB) and came across this: http://ldiracdelta.blogspot.com/2012/01/detect-duplicate-files-in-linux-or.html [blogspot.com]

  • by v1 ( 525388 ) on Sunday September 02, 2012 @09:19AM (#41205477) Homepage Journal

    I had to do that with an iTunes library recently. Nowhere near the number of items you're working with, but same principle - watch your O's. (That's the first time I've had to deal with a 58MB XML file!) After the initial run forecasting 48 hrs and not being highly reliable, I dug in and optimized. A few hours later I had a program that would run in 48 seconds. When you're dealing with data sets of that size, process optimization really can matter that much. (If it's taking too long, you're almost certainly doing it wrong.)

    The library I had to work with had an issue with songs being in the library multiple times, under different names, and that ended up meaning there was NOTHING unique about the songs short of the checksums. To make matters WORSE, I was doing this offline. (I did not have access to the music files which were on the customer's hard drives, all seven of them)

    It sounds like you are also dealing with differing filenames. I was able to figure out a unique hashing system based on the metadata I had in the library file. If you can't do that, and I suspect you don't have any similar information to work with, you will need to do some thinking. Checksumming all the files is probably unnecessarily wasteful. Files that aren't the same size don't need to be checksummed. You may decide to consider files with the same size AND same creation and/or modification dates to be identical. That will reduce the number of files you need to checksum by several orders of magnitude. A file key may be "filesize:checksum", where unique filesizes just have a 0 for the checksum.

    Write your program in two separate phases. First phase is to gather checksums where needed. Make sure the program is resumable. It may take awhile. It should store a table somehow that can be read by the 2nd program. The table should include full pathname and checksum. For files that did not require checksumming, simply leave it zero.

    Phase 2 should load the table and create a collection from it. Use a language that supports it natively. (REALbasic does, and is very fast and mac/win/lin targetable.) For each item, do a collection lookup. Collections store a single arbitrary object (the pathname) via a key (the checksum). If the collection (key) doesn't exist, it will create a new collection entry with that as its only object. If it already exists, the object is appended to the array for that collection. That's the actual deduping process, and it will be done in a few seconds. Dictionaries and collections kick ass for deduping.

    From here you'll have to decide what you want to do.... delete, move, whatever. Duplicate songs required consolidation of playlists when removing dups for example. Simply walk the collection, looking for items with more than one object in the collection. Decide what to keep and what to do elsewise with (delete?) I recommend dry-running it and looking at what it's going to do before letting it start blowing things away.

    It will take 30-60 min to code probably. The checksum part may take awhile to run. Assuming you don't have a ton of files that are the same size (database chunks, etc) the checksumming shouldn't be too bad. The actual processing afterward will be relatively instantaneous. Use whatever checksumming method you can find that works fastest.

    The checksumming part can be further optimized by doing it in two phases, depending on file sizes. If you have a lot of files that are large-ish (>20mb) that will be the same size, try checksumming in two steps. Checksum the first 1mb of the file. If they differ, ok, they're different. If they're the same, ok then checksum the entire file. I don't know what your data set is like so this may or may not speed things up for you.
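
    The poster worked in REALbasic; a Python sketch of that phase-2 dictionary walk might look like this, assuming (purely for the example) that phase 1 wrote a tab-separated table of path, size and checksum:

    # Phase 2: bucket every entry by a "filesize:checksum" key; any bucket with
    # more than one path is a duplicate set. Files with a unique size carry
    # checksum 0 and can never land in a shared bucket.
    from collections import defaultdict

    def load_duplicate_sets(table_path):
        buckets = defaultdict(list)
        with open(table_path, encoding="utf-8") as table:
            for line in table:
                path, size, checksum = line.rstrip("\n").split("\t")
                buckets[f"{size}:{checksum}"].append(path)
        return {key: paths for key, paths in buckets.items() if len(paths) > 1}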

  • by williamyf ( 227051 ) on Sunday September 02, 2012 @09:20AM (#41205485)

    After you have found the "equal files", you need to decide which one to erase and which ones to keep. For example, let's say that a GIF file is part of a web site and is also present in a few other places because you backed it up to removable media which later got consolidated. If you choose to erase the copy that is part of the website structure, the website will stop working.

    Lucky for you, most filesystem implementations nowadays include the capacity to create links, both hard and soft (in Windows, that would be NTFS symbolic links since Vista and junction points since Win2K; in *nix it's the soft and hard links we know and love; and on the Mac, the engineers added hard links to whole directories). So the solution must not only identify which files are the same, but also keep one copy while preserving accessibility; this is what makes Apple(R)(C)(TM) work so well. You will need a script that, upon identifying equal files, erases all but one and creates symlinks from all the erased ones to the surviving one.
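
    A sketch of that replace-with-a-link step (Python's os.link / os.symlink; note that creating symlinks on Windows may require elevated privileges or developer mode, and hard links only work within a single volume):

    # Given a group of byte-identical files, keep the first one and replace the
    # rest with links pointing at it. The original is only removed once the
    # link has been created successfully.
    import os

    def relink_duplicates(duplicate_group, use_hardlinks=True):
        keeper, *extras = duplicate_group
        for extra in extras:
            backup = extra + ".dedupe-old"   # park the original until the link works
            os.rename(extra, backup)
            try:
                if use_hardlinks:
                    os.link(keeper, extra)   # hard link: same volume required
                else:
                    os.symlink(os.path.abspath(keeper), extra)
            except OSError:
                os.rename(backup, extra)     # link failed: put the original back
                raise
            os.remove(backup)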

  • I'm going through this same thing. New master PC, and trying to consolidate 8 zillion files and copies of files from the last decade or so.
    If you're like me, you copied folders or trees instead of individual files. FreeFileSync will show you which files are different between two folders.

    Grab two folders you think are pretty close. Compare. Then Sync. This copies dissimilar files in both directions. Now you have two identical folders/files. Delete one of the folders. Wash, rinse, repeat.
    Time consuming, but
  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Sunday September 02, 2012 @09:24AM (#41205517)

    Your problem isn't de-duping files in your archives; your problem is getting an overview of your data archives. If you had it, you wouldn't have dupes in the first place.

    This is a larger personal project, but you should take it on, since it will be a good lesson in data organisation. I've been there and done that.

    You should get a rough overview of what you're looking at and where to expect large sets of dupes. Do this by manually parsing your archives in broad strokes. If you want to automate dupe-removal, do so by de-duping smaller chunks of your archive. You will need extra CPU and storage - maybe borrow a box or two from friends and set up a batch of scripts you can run from Linux live CDs with external HDDs attached.

    Most likely you will have to do some scripting or programming, and you will have to devise a strategy not only for dupe removal, but for merging the remaining skeletons of dirtrees. That's actually the tough part. Removing dupes takes raw processing power and can be done in a few weeks with brute force and solid storage bandwidth.

    Organising the remaining stuff is where the real fun begins. ... You should start thinking about what you are willing to invest and how your backup, versioning and archiving strategy should look in the end, data/backup/archive retrieval included. The latter might even determine how you go about doing your dirtree diffs - maybe you want to use a database for that for later use.

    Anyway you put it, just setting up a box in the corner and having a piece of software churn away for a few days, weeks or months won't solve your problem in the end. If you plan well, it will get you started, but that's the most you can expect.

    As I say: Been there, done that.
    I still have unfinished business in my backup/archiving strategy and setup, but the setup now is 2 1TB external USB3 drives and manual rsync sessions every 10 weeks or so to copy from HDD-1 to HDD-2 to have dual backups/archives. It's quite simple now, but it was a long hard way to clean up the mess of the last 10 years. And I actually was quite conservative about keeping my boxes tidy. I'm still missing external storage in my setup, aka Cloud-Storage, the 2012 buzzword for that, but it will be much easier for me to extend to that, now that I've cleaned up my shit halfway.

    Good luck, get started now, work in iterations, and don't be silly and expect this project to be over in less than half a year.

    My 2 cents.

  • Delete the dupes, but be sure to make copies first.

  • by Terrasque ( 796014 ) on Sunday September 02, 2012 @10:01AM (#41205721) Homepage Journal

    I found a python script online and hacked it a bit to work on a larger scale.

    The script originally scanned a directory, found files with same size, and md5'ed them for comparison.

    Among other things, I added an option to ignore files under a certain size, and to cache MD5s in a SQLite db. I also think I made some changes to the script to handle a large number of files better and do the MD5s more efficiently (I also added an option to limit the number of bytes to MD5, but that didn't make much difference in performance for some reason). I also added an option to hard link files that are the same.

    With inodes in memory, and sqlite db already built, it takes about 1 second to "scan" 6TB of data. First scan will probably take a while, tho.

    Script here [dropbox.com] - It's only tested on Linux.

    Even if it's not perfect, it might be a good starting point :)
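
    The linked script isn't reproduced here, but the caching idea it describes - store each file's hash in SQLite keyed by size and mtime so a re-scan only hashes what has changed - can be sketched like this (table and function names are made up):

    # Cache full-file MD5s in SQLite so repeated scans only re-hash files whose
    # size or mtime changed since the last run.
    import hashlib
    import os
    import sqlite3

    def open_cache(path="hash-cache.sqlite"):
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS hashes
                      (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, md5 TEXT)""")
        return db

    def cached_md5(db, path, chunk=1 << 20):
        st = os.stat(path)
        row = db.execute("SELECT size, mtime, md5 FROM hashes WHERE path = ?",
                         (path,)).fetchone()
        if row and row[0] == st.st_size and row[1] == st.st_mtime:
            return row[2]                    # unchanged since the last scan: reuse
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        db.execute("INSERT OR REPLACE INTO hashes VALUES (?, ?, ?, ?)",
                   (path, st.st_size, st.st_mtime, h.hexdigest()))
        db.commit()
        return h.hexdigest()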

  • If You're Like Me (Score:3, Interesting)

    by crackspackle ( 759472 ) on Sunday September 02, 2012 @10:08AM (#41205751)

    The problem started with a complete lack of discipline. I had numerous systems over the years and never really thought I needed to bother with any tracking or control system to manage my home data. I kept way too many minor revisions of the same file, often forking them over different systems. As time passed and I rebuilt systems, I could no longer remember where all the critical stuff was, so I'd create tar or zip archives over huge swaths of the file system just in case. I eventually decided to clean up, like you are now, when I had over 11 million files. I am down to less than half a million now. While I know there are still effective duplicates, at least the size is what I consider manageable. For the stuff from my past, I think this is all I can hope for; however, I've now learned the importance of organization, documentation and version control so I don't have this problem again in the future.

    Before even starting to de-duplicate, I recommend organizing your files in a consistent folder structure. Download MediaWiki and start a wiki documenting what you're doing with your systems. The more notes you make, the easier it will be to reconstruct work you've done as time passes. Do this for your other day-to-day work as well. Get git and start using it for all your code and scripts. Let git manage the history and set it up to automatically duplicate changes on at least one other backup system. Use rsync to do likewise on your new directory structure. Force yourself to stop making any change you consider worth keeping outside of these areas. If you take these steps, you'll likely not have this problem again, at least on the same scope. You'll also find it a heck of a lot easier to decommission or rebuild home systems, and you won't have to worry about "saving" data if one of them craps out.

    • If you need MediaWiki to manage the documentation about your filesystem structure, you really have a problem.
      TiddlyWiki [tiddlywiki.com] should be more than sufficient for that task.

  • It does the job for me, the selection assistant is quite powerful.
    http://www.digitalvolcano.co.uk/content/duplicate-cleaner [digitalvolcano.co.uk]
    Fast, but the old version (2.0) was better and freeware if you can still find a copy of it.

  • I have too many, due to simply being a messy pig and pedantic with files.
    The best tool I've found is called Duplicate Cleaner - it's from Digital Volcano.
    I do not work for / am not affiliated with these people.

    I've used many tools over the years, DFL, Duplic8 and "Duplicate Files Finder" - one of which had a shitty bug which matched non identical files.

    Duplicate Cleaner's algorithm is good and the UI, while not perfect, is one of the better ones at presenting the data. Especially identifying entire branch

  • by TheLink ( 130905 ) on Sunday September 02, 2012 @10:18AM (#41205803) Journal
    It's only 5TB. Why dedupe? Just buy another HDD or two. How much is your time worth anyway?

    You say the data is important enough that you don't want to nuke it. Wouldn't it be also true to say that the data that you've taken the trouble to copy more than once is likely to be important? So keep those dupes.

    To me not being able to find stuff (including being aware of stuff in the first place) would be a bigger problem :). That would be my priority, not eliminating dupes.
  • As many others have stated, use a tool that computes a hash of file contents. Coincidentally, I wrote one last week to do exactly this [blogspot.ca] when I was organizing my music folder. It'll interactively prompt you for which file to keep among the duplicates once it's finished scanning. It churns through about 30 GB of data in roughly 5 minutes. Not sure if it will scale to 4.2 million files, but it's worth a try!

  • Use DROID 6 (Score:5, Informative)

    by mattpalmer1086 ( 707360 ) on Sunday September 02, 2012 @10:21AM (#41205829)

    There is a digital preservation tool called DROID (Digital Record Object Identification) which scans all the files you ask it to, identifying their file type. It can also optionally generate an MD5 hash of each file it scans. It's available for download from sourceforge (BSD license, requires Java 6, update 10 or higher).

    http://sourceforge.net/projects/droid/ [sourceforge.net]

    It has a fairly nice GUI (for Java, anyway!), and a command line if you prefer scripting your scan. Once you have scanned all your files (with MD5 hash), export the results into a CSV file. If you like, you can first also define filters to exclude files you're not interested in (e.g. small files could be filtered out). Then import the CSV file into your data analysis app or database of your choice, and look for duplicate MD5 hashes. Alternatively, DROID actually stores its results in an Apache Derby database, so you could just connect directly to that rather than export to CSV, if you have a tool that can work with Derby.

    One of the nice things about DROID when working over large datasets is that you can save the progress at any time and resume scanning later on. It was built to scan very large government datastores (multiple TB). It has been tested over several million files (this can take a week or two to process, but as I say, you can pause at any time, save or restore, although only from the GUI, not the command line).

    Disclaimer: I was responsible for the DROID 4, 5 and 6 projects while working at the UK National Archives. They are about to release an update to it (6.1 I think), but it's not available just yet.

  • So your de-dupe ran for a week before you cut it out? On a modern CPU, the de-dupe is limited not by the CPU speed (since deduplication basically just checksums blocks of storage), but by the speed of the drives.

    What you need to do is put all this data onto a single RAID10 array with high IO performance. 5TB of data, plus room to grow on a RAID10 with decent IOPS would probably be something like 6 3TB SATA drives on a new array controller. Set up the array with a large stripe size to prioritize reads (write

  • At a superficial level, the issue would seem to be quite hard, but with a little planning it shouldn't be *that* hard.

    My path would be to go out and build a new file server running either Windows Server or Linux, based on what OS your current file server uses, install the de-dupe tool of your choice from the many listed above, and migrate your entire file structure from your current box to the new box - the de-dupe tools will work their magic as the files come in over the network connection. Once de-dup

  • This gives a sha256sum list of all files, assuming you are on Linux, writing it to list.sha256 in the base of your home folder:

    find /<folder_containing_data> -type f -print0 | xargs -0 sha256sum > ~/list.sha256

    You may replace sha256sum with another checksum routine if you want, such as sha512sum, md5sum, sha1sum, or another preference.

    now sort the file:

    sort ~/list.sha256 > ~/list.sha256.sorted

    (notice, this creates a sorted list according to the sha256 value, but with the path to the file as

  • by 3seas ( 184403 ) on Sunday September 02, 2012 @10:44AM (#41205961) Homepage Journal

    ...it will have cost you far more than simply buying another drive(s) if all you are really concerned about is space...

  • My home-rolled solution to exactly this problem is: http://gnosis.cx/bin/find-duplicate-contents [gnosis.cx].

    This script is efficient algorithmically and has a variety of options to work incrementally and to optimize common cases. It's not excessively user-friendly, possibly, but the --help screen gives reasonable guidance. And the whole thing is short and readable Python code (which doesn't matter for speed, since the expensive steps like MD5 are callouts to fast C code in the standard library).

  • by fulldecent ( 598482 ) on Monday September 03, 2012 @12:36AM (#41210887) Homepage

    Best tool: http://hungrycats.org/~zblaxell/dupemerge/faster-dupemerge [hungrycats.org] has worked great for me over the past 10 years. Scales.
