Ask Slashdot: How Do I De-Dupe a System With 4.2 Million Files? 440
First time accepted submitter jamiedolan writes "I've managed to consolidate most of my old data from the last decade onto drives attached to my main Windows 7 PC. Lots of files of all types, from digital photos & scans to HD video files (also web site backups mixed in, which are the cause of such a high number of files). In more recent times I've organized files in a reasonable folder system and have an active / automated backup system. The problem is that I know I have many old files that have been duplicated multiple times across my drives (many from doing quick backups of important data to an external drive that later got consolidated onto a single larger drive), chewing up space. I tried running a free de-dupe program, but it ran for a week straight and was still 'processing' when I finally gave up on it. I have a fast system, an i7 at 2.8 GHz with 16 GB of RAM, but currently have 4.9 TB of data with a total of 4.2 million files. Manual sorting is out of the question due to the number of files and my old, sloppy filing (folder) system. I do need to keep the data; nuking it is not a viable option."
CRC (Score:5, Informative)
Do a CRC32 of each file. Write to a file one per line in this order: CRC, directory, filename. Sort the file by CRC. Read the file linearly doing a full compare on any file with the same CRC (these will be adjacent in the file).
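A rough Python sketch of that recipe (CRC32 via zlib; I keep the list in memory rather than writing it out to a file, and the root path is a placeholder you'd swap for your own drives):

import os, zlib

def crc32_of(path, bufsize=1 << 20):
    # CRC32 of a file, read in chunks so big video files don't blow up memory
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            crc = zlib.crc32(chunk, crc)
    return crc

root = r"D:\archive"  # placeholder for the drive(s) to scan
records = []
for dirpath, _, names in os.walk(root):
    for name in names:
        p = os.path.join(dirpath, name)
        try:
            records.append((crc32_of(p), p))
        except OSError:
            pass  # unreadable file, skip it

records.sort()  # sort by CRC so candidate duplicates land next to each other
for i in range(1, len(records)):
    if records[i][0] == records[i - 1][0]:
        # same CRC: confirm with a full compare, e.g. filecmp.cmp(..., shallow=False)
        print("possible dup:", records[i - 1][1], "<->", records[i][1])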
Re:CRC (Score:5, Informative)
s/CRC32/sha1 or md5, you won't be CPU bound anyway.
Re:CRC (Score:5, Informative)
Re:CRC (Score:5, Informative)
1. Compare filesizes.
2. When there are multiple files with the same size, start diffing them. I don't read the whole file to compute a checksum - that's inefficient with large files. I simply read the two files byte by byte, and compare - that way, I can quit checking as soon as I hit the first different byte.
Source is at https://github.com/caluml/finddups [github.com] - it needs some tidying up, but it works pretty well.
git clone, and then mvn clean install.
Re:CRC (Score:5, Insightful)
Part 2 of your method will quickly bog down if you run into many files that are the same size. Takes (n choose 2) comparisons, for a problem that can be done in n time. If you have 100 files all of one size, you'll have to do 4950 comparisons. Much faster to compute and sort 100 checksums.
Also, you don't have to read the whole file to make use of checksums, CRCs, hashes and the like. Just check a few pieces likely to be different if the files are different, such as the first and last 2000 bytes. Then for those files with matching parts, check the full files.
Re:CRC (Score:4, Insightful)
Anyway... (Score:2)
Only if you have 100 unique files (Score:5, Informative)
If you have 100 files all of one size, you'll have to do 4950 comparisons.
You only have to do 4950 comparisons if you have 100 unique files.
What I do is pop the first file from the list, to use as a standard, and compare all the files with it, block by block. If a block fails to match, I give up on that file matching the standard. The files that don't match generally don't go very far, and don't take much time. For the ones that match, I would have taken all that time if I was using a hash method anyway. As for reading the standard file multiple times: It goes fast because it's in cache.
The ones that match get taken from the list. Obviously I don't compare the ones that match with each other. That would be stupid.
Then I go back to the list and rinse/repeat until there are fewer than 2 files left.
I have done this many times with a set of 3 million files which take up about 600GB.
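Roughly, in Python (the block size and the assumption that this gets fed one same-size group at a time are mine, not the parent's code):

def same_content(a, b, blocksize=1 << 20):
    # byte-for-byte compare; bail out at the first differing block
    with open(a, "rb") as fa, open(b, "rb") as fb:
        while True:
            ba, bb = fa.read(blocksize), fb.read(blocksize)
            if ba != bb:
                return False
            if not ba:  # both hit EOF together
                return True

def find_matches(same_size_paths):
    # pop a 'standard' file, compare everything against it, remove the matches,
    # and repeat until fewer than 2 files remain
    groups, remaining = [], list(same_size_paths)
    while len(remaining) >= 2:
        standard = remaining.pop(0)
        matches, rest = [standard], []
        for other in remaining:
            (matches if same_content(standard, other) else rest).append(other)
        if len(matches) > 1:
            groups.append(matches)
        remaining = rest
    return groups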
Re: (Score:3)
Re: (Score:2)
Added benefit, when sorting by filesize you can hit the biggest ones first. Depending on your dataset, most of your redundant data might be in just a few duplicated files.
Re:CRC (Score:4, Informative)
For the lazy, here are 3 more tools:
fdupes [caribe.net], duff [sourceforge.net], and rdfind [pauldreik.se].
Duff claims it's O(n log n), because they:
Only compare files if they're of equal size.
Compare the beginning of files before calculating digests.
Only calculate digests if the beginning matches.
Compare digests instead of file contents.
Only compare contents if explicitly asked.
Re: (Score:3, Informative)
Unique Filer http://www.uniquefiler.com/ [uniquefiler.com] implements these short-circuits for you.
It's meant for images but will handle any filetype, and even runs under WINE.
Re:CRC (Score:4, Funny)
I looked at this as I, like the subby, have terabytes of porn to sort.
But $19.95 for a beta?
Re:CRC (Score:4, Insightful)
$19.95 for a beta of something you can whip up in about an hour of shell scripting.
Hell, I wrote exactly what people are talking about here in an afternoon in college - I even did both SHA and MD5, because I ended up finding a SHA collision between one of the Quake 3 files and a Linux system file.
Re:CRC (Score:5, Insightful)
$19.95 for a beta of something you can whip up in an hour of shell scripting.
If the poster were you, they wouldn't have had to 'ask slashdot'.
Just hash first 4K of each file, avoid 2nd pass (Score:2, Insightful)
Only hash the first 4K of each file and just do them all. The size check will save a hash only for files with unique sizes, and I think there won't be many with 4.2M media files averaging ~1MB. The second near-full directory scan won't be all that cheap.
Re:CRC (Score:4, Insightful)
s/CRC32/sha1 or md5, you won't be CPU bound anyway.
Whatever you use it's going to be SLOW on 5TB of data. You can probably eliminate 90% of the work just by:
a) Looking at file sizes, then
b) Looking at the first few bytes of files with the same size.
After THAT you can start with the checksums.
Re: (Score:3)
Indeed, I once created a dedup script which basically did that.
1. compare the file sizes
2. compare the first 1MB of the file
3. compare the last 1MB of the file
4. compare the middle 1MB in the file
It's not a 100% foolproof solution but it was more than enough for my use case at that time and much faster than getting checksums.
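Something along these lines, presumably (the 1 MB chunk and the three offsets come from the list above; the rest is guesswork):

import os

CHUNK = 1 << 20  # 1 MB

def probably_same(a, b):
    # heuristic match: same size plus identical first/last/middle 1 MB;
    # not foolproof, as noted, but it never reads whole files
    size = os.path.getsize(a)
    if size != os.path.getsize(b):
        return False
    offsets = (0, max(size - CHUNK, 0), max(size // 2 - CHUNK // 2, 0))
    with open(a, "rb") as fa, open(b, "rb") as fb:
        for off in offsets:
            fa.seek(off)
            fb.seek(off)
            if fa.read(CHUNK) != fb.read(CHUNK):
                return False
    return True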
Re:CRC (Score:5, Informative)
b) Looking at the first few bytes of files with the same size.
Note that there's no reason to only look at the first few bytes. On spinning disks, any read smaller than about 16K will take the same amount of time. Comparing two 16K chunks takes zero time compared to how long it takes to read them from disk.
You could, for that matter, make it a 3-pass system that's pretty fast:
a) get all file sizes; remove all files that have unique sizes
b) compute the MD5 hash of the first 16K of each file; remove all files that have unique (size, header-hash) pairs
c) compute the MD5 hash of the whole file; remove all files that have unique (size, hash) pairs
Now you have a list of duplicates.
Don't forget to eliminate all files of zero length in step (a). They're trivially duplicates but shouldn't be deduplicated.
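A compact sketch of that three-pass filter in Python (MD5 via hashlib, 16K header as above; the grouping helpers are mine):

import hashlib, os
from collections import defaultdict

def md5_of(path, limit=None, bufsize=1 << 20):
    # MD5 of the whole file, or of just the first `limit` bytes
    h, remaining = hashlib.md5(), limit
    with open(path, "rb") as f:
        while True:
            n = bufsize if remaining is None else min(bufsize, remaining)
            chunk = f.read(n)
            if not chunk:
                break
            h.update(chunk)
            if remaining is not None:
                remaining -= len(chunk)
                if remaining <= 0:
                    break
    return h.hexdigest()

def groups_with_company(paths, key):
    # bucket by key and keep only buckets with 2+ members (singletons can't be dupes)
    buckets = defaultdict(list)
    for p in paths:
        buckets[key(p)].append(p)
    return [g for g in buckets.values() if len(g) > 1]

def find_dupes(paths):
    paths = [p for p in paths if os.path.getsize(p) > 0]               # (a) drop zero-length files
    dupes = []
    for g in groups_with_company(paths, os.path.getsize):              # (a) same size
        for g2 in groups_with_company(g, lambda p: md5_of(p, 16384)):  # (b) same 16K header hash
            dupes.extend(groups_with_company(g2, md5_of))              # (c) same full-file hash
    return dupes  # a list of duplicate groups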
Re: (Score:2)
This is similar to what git and ZFS do (but with a better hash, some kind of sha I think).
Re: (Score:3)
Exactly.
1. Install MySQL,
2. create a table (CRC, directory, filename, filesize)
3. fill it in
4. play with inner joins.
I'd even go down the path of forgetting about the CRC. Before deleting something, do a manual check anyway. A CRC has the advantage of making the matching very straightforward, but it's a bit more expensive to generate.
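If installing MySQL feels heavy, the same idea fits in SQLite with no server at all; a sketch along those lines (table layout per step 2, CRC column left NULL so you can fill it in lazily, the root path a placeholder):

import os, sqlite3

db = sqlite3.connect("files.db")
db.execute("""CREATE TABLE IF NOT EXISTS files
              (crc INTEGER, directory TEXT, filename TEXT, filesize INTEGER)""")

root = "/mnt/archive"  # placeholder
for dirpath, _, names in os.walk(root):        # step 3: fill it in (sizes only for now)
    for name in names:
        p = os.path.join(dirpath, name)
        try:
            size = os.path.getsize(p)
        except OSError:
            continue
        db.execute("INSERT INTO files VALUES (NULL, ?, ?, ?)", (dirpath, name, size))
db.commit()

# step 4: a self-join on filesize lists the candidate pairs worth CRCing or eyeballing
for row in db.execute("""SELECT a.directory, a.filename, b.directory, b.filename, a.filesize
                         FROM files a JOIN files b
                           ON a.filesize = b.filesize AND a.rowid < b.rowid"""):
    print(row)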
Re: (Score:2)
Use SHA-1 instead of CRC.
Re: (Score:3, Interesting)
Re: (Score:3)
I don't know where you are finding these numbers, but they are about as wrong as it is possible to get.
There is no known SHA-1 collision yet in the entire world. You're not going to find 500 of them in your dump of old files.
Re:CRC (Score:5, Informative)
That's totally, completely wrong. The birthday problem isn't a breakthrough concept, and the probability of random SHA-1 collisions is therefore calculated with it in mind. The number is known to be 1/2^80. This is straightforwardly derived from the total number of SHA-1 values, 2^160, which is then immensely reduced by the birthday paradox to 2^80 expected hashes required for a collision. This means that a hard drive with 2^80 or 1,208,925,819,614,629,174,706,176 files would have on average ONE collision. Note that this is a different number than the number of hashes one has to generate for a targeted cryptographic SHA-1 attack, which with best current theory is on the order of 2^51 [wikipedia.org] for the full 80-round SHA-1, although as Goaway has pointed out, no such collision has yet been found.
Frankly I'm at a loss as to how you arrived at 500 SHA-1 collisions out of 4.2 million files. That's ludicrous. Any crypto hashing function with such a high collision rate would be useless. Much worse than MD5, even.
Re: (Score:3, Interesting)
4. play with inner joins.
Much like there's 50 ways to do anything in Perl, there's quite a few ways to do this in SQL.
select filename_and_backup_tape_number_and_stuff_like_that, count(*) as number_of_copies
from pile_of_junk_table
group by md5hash
having number_of_copies > 1
There's another strategy where you mush two tables up against each other... one is basically the DISTINCT of the other.
Triggers are widely complained about, but you can implement a trigger system (or pseudo-trigger, where you make a wrapper function in your app)
Re: (Score:2)
The CRC is not just a bit more complex to generate; it forces you to read the entire file. Reading 5 TB of data takes quite a lot more time than reading a filesystem with 4M files. So yes, delay the CRC and play with filesizes first.
Re: (Score:2)
You can check a few files in a directory and then easily deduce the whole directory is a dupe. You don't have to do it file by file.
Plus, when the system finds a dupe, you need to tell it which copy it should delete, or else you risk having stuff all around and not knowing where it is. Some file you knew was in directory A/B/C/D is suddenly not there anymore and you have no clue where its "dupe" is located. Unless the dupe finder creates symlinks in place of the deleted file...
Re: (Score:3)
DO NOT do a CRC, do a hash. Too many chances of collision with a CRC.
But that still won't fix his real problem - he's got lots of data to process and only one system to process it with.
Re: (Score:2)
Did you read the bit about "doing a full compare on any file with the same CRC"?
The CRC is just for bringing likely files together. It will work fine.
Re:CRC (Score:4, Interesting)
I confess, if I had a modern i5 or i7 processor and appropriate software I'd be tempted to in fact calculate some sort of AES-based HMAC, as I would have hardware assist to do that.
Re: (Score:2)
4.2 million, not billion. About 2^22 files.
Re: (Score:2)
Fortunately, you actually only need about 2^16 files to get collisions on a 32-bit CRC.
Re:CRC (Score:5, Interesting)
This was theorized by one of the RSA guys (Rivest, if I'm not mistaken). I helped support a system that identified files by CRC32, as a lot of tools did back then. As soon as we got to about 65k files (2^16), we had two files with the same CRC32.
Let me say, CRC32 is a very good algorithm. So good, I'll tell you how good. It is 4 bytes long, which means in theory you can change any 4 bytes of a file and get a CRC32 collision, unless the algorithm distributes them randomly, in which case you will get more or less.
I naively tried to reverse engineer a file from a known CRC32. Optimized and recursive, on a 333 MHz computer, it took 10 minutes to generate the first collision. Then every 10 minutes or so. Every 4 bytes (last 4, last 5 with the original last byte, last 6 with the original last 2 bytes, etc.) there was a collision.
Compare file sizes first, not CRC32. The 2^16 estimate is not only mathematically proven, but also borne out in the big boy world. I tried to move the community towards another hash.
CRC32 *and* filesize are a great combination. File size is not included in the 2^16 estimate. I have yet to find two files in the real world, in the same domain (essentially type of file), with the same size and CRC32.
Be smart, use the right tool for the job. First compare file size (ignoring things like mp3 ID3 tags, or other headers). Then do two hashes of the contents - CRC32 and either MD5 or SHA1 (again ignoring well-known headers if possible). Then out of the results, you can do a byte for byte comparison, or let a human decide.
This is solely to dissuade CRC32 based identification. After all, it was designed for error detection, not identification. For a 4-byte file, my experience says CCITT standard CRC32 will work for identification. For 5 byte files, you can have two bytes swapped and possibly have the same result. The longer the file, the less likely it is to be unique.
Be smart, use size and two or more hashes to identify files. And even then, verify the contents. But don't compute hashes on every file - the operating system tells you file size as you traverse the directories, so start there.
Re:CRC (Score:5, Insightful)
I once had to write an audio file de-duplicator; one of the big problems was that you would ignore the metadata and the out-of-band data when you did the comparisons, but you always had to take that stuff into account when you were deciding which version of a file to keep -- you didn't want to delete two copies of a file with all the tags filled out and keep the one that was naked.
My de-duper worked like everyone here is saying -- it cracked open wav and aiff (and Sound Designer 2) files, captured their sample count and sample format into a sqlite db, did a couple of big joins and then did some SHA1 hashes of likely suspects. All of this worked great, but once I had the list I had the epiphany that the real problem of these tools is the resolution and how you make sure you're doing exactly what the user wants.
How do you decide which one to keep? You can just do hard links, but...
But let's say you can do hard links, no problem. How do you decide which instance of the file is to be kept, if you've only compared the "real" content of the file and ignored metadata? You could just give the user a big honking list of every set of files that are duplicates -- two here, three here, six here, and then let them go through and elect which one will be kept, but that's a mess and 99% of the time they're going to select a keeper on the basis of which part of the directory tree it's in. So, you need to do a rule system or a preferential ranking of parts of the directory hierarchy that tell the system "keep files you find here." Now, the files will also have metadata, so you also have to preferentially rank the files on the basis of its presence -- you might also rank files higher if your guy did the metadata tagging, because things like audio descriptions are often done with a specialized jargon that can be specific to a particular house.
Also, it'd be very common to delete a file from a directory containing an editor's personal library and replace it with a hard link to a file in the company's main library -- several people would have copies of the same commercial sound, or an editor would be the recordist of a sound that was subsequently sold to a commercial library, or whatever. Is it a good policy to replace his file with a hardlink to a different one, particularly if they differ in the metadata? Directories on a volume are often controlled by different people with different policies and proprietary interest in the files -- maybe the company "owns" everything, but it still can create a lot of internal disputes if files in a division's or individual project's library folder start getting their metadata changed on account of being replaced with a hard link to a "better" file in the central repository. We can agree not to de-dup these, but it's more rules and exceptions that have to be made.
Once you have the list of duplicates, and maybe the rules, do you just go and delete, or do you give the user a big list to review? And if, upon review, he makes one change to one duplicate instance, it'd be nice to have that change intelligently reflected on the others. The rules have to be applied to the dupe list interactively and changes have to be reflected in the same way, otherwise it becomes a miserable experience for the user to de-dupe 1M files over 7 terabytes. The resolution of duplicates is the hard part; the finding of dupes is relatively easy.
Re: (Score:2)
Re:CRC (Score:5, Insightful)
Re: (Score:2)
Sounds ideal. Wouldn't take long to code, nor execute.
Re: (Score:2)
You're not baffled.
Bert
Re: (Score:3)
divide and conquer.
Your idea of using file size as the first discriminant is good. It's fast and throws out a lot of things that don't need to be checked.
Another accelerant is to find out whether the count of the # of files in a folder is the same, and if a few are the same, maybe the rest are. Use 'info' like that to make it run faster.
I have this problem and am going to write some code to do this, too.
but I might have some files that are 'close' to the others and so I need smarter code. example: some music files mig
Re:CRC (Score:5, Informative)
There is absolutely no point in generating CRCs of files unless they match on some other, simpler-to-compare characteristic like file size. The trick is to break the problem apart into smaller chunks. Start with the very large files; the exact size break to use will depend on the data set, but as the poster mentioned video files, say everything over 1 GB to start. Chances are you can fully de-dupe your very large files manually, based on nothing more than a visual inspection of names and file sizes, in little more time than it takes to find them all in the first place. You can then exclude those files from further checks, and more importantly, from CRC generation.
After that, try and break the problem down into smaller chunks. Whether you are sorting on size, name or CRC, it's quicker to do so when you only have a few hundred thousand files rather than several million. Maybe do another size-constrained search; 512 MB-1 GB, say. Or, if you have them, look for duplicated backup files in the form of ZIP files, or whatever archive format(s) you are using, based on their extension - that also saves you having to expand and examine the contents of multiple archive files. Similarly, do a de-dupe of just the video files by extension, as these should again lend themselves to rapid manual sorting without having to generate CRCs for many GB of data. Another grouping to consider might be to at least try and get all of the website data, or as much of it as you can, into one place and de-dupe that, and consider whether you really need multiple archival copies of a site, or whether just the latest/final revision will do.
By the time you've done all that, including moving the stuff that you know is unique out of the way and into a better filing structure as you go, the remainder should be much more manageable for a single final pass. Scan the lot, identify duplicates based on something simple like the file size and, ideally, manually get your de-dupe tool to CRC only those groups of identically sized files that you can't easily tell apart like bunches of identically sized word processor or image files with cryptic file names.
Re: (Score:3)
Re: (Score:2)
It's possible the free de-dup program was trying to do that.
Best-case scenarios would put your hash time at roughly 14 to 55 hours (100 MB/s down to 25 MB/s) for 4.9 TB
But millions of small files are the absolute worst case scenario.
God help you if there's any fragmentation.
Re:CRC (Score:5, Informative)
If you get a linux image running (say in a livecd or VM) that can access the file system then fdupes is built to do this already. Various output format/recursion options.
From the man page:
DESCRIPTION
Searches the given path for duplicate files. Such files are found by
comparing file sizes and MD5 signatures, followed by a byte-by-byte
comparison.
File size then interleaved secure hash (Score:3)
This is a very fun programming task!
Since it will be totally limited by disk IO, the language you choose doesn't really matter, as long as you make sure that you never read each file more than once:
1) Recursive scan of all disks/directories, saving just file name and size plus a pointer to the directory you found it in.
If you have multiple physical disks you can run this in parallel, one task/thread for each disk.
2) Sort the list by file size.
3) For each file size with multiple entries
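A sketch of steps 1-3 as far as they're described (one scanning thread per physical disk; what to do with each same-size group afterwards is left open, just as the comment is):

import os
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def scan(root):
    # step 1: names, sizes and parent directories only -- no file contents yet
    out = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            try:
                out.append((os.path.getsize(p), p))
            except OSError:
                pass
    return out

roots = ["/mnt/disk1", "/mnt/disk2"]  # one entry per physical disk (placeholders)
with ThreadPoolExecutor(max_workers=len(roots)) as pool:
    entries = [e for part in pool.map(scan, roots) for e in part]

entries.sort()                       # step 2: sort by file size
by_size = defaultdict(list)
for size, path in entries:
    by_size[size].append(path)
for size, paths in by_size.items():  # step 3: only sizes with multiple entries matter
    if len(paths) > 1:
        print(size, paths)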
Re: (Score:3)
I have a script which does this for openstreetmap tiles. Once it identifies the dupes it archives all the tiles into a single file, pointing the dupes at a single copy in the archive. Then I use a Linux fuse filesystem to read the file and present the results to Apache. Saves a truly massive amount of disk space for an openstreetmap server since the files are mostly smaller than a single disk block and never consume enough disk blocks that the space lost to the inode and unused part of the last block is ins
Re: (Score:3, Funny)
Do a CRC32 of each file. Write to a file one per line in this order: CRC, directory, filename. Sort the file by CRC. Read the file linearly doing a full compare on any file with the same CRC (these will be adjacent in the file).
Would you be so kind as to write a program/script which can do that?
Payment information please, AC?
Re:CRC (Score:5, Insightful)
Someone whose technical expertise is in areas other than writing script files. There are technical jobs other than being a sysop, you know.
Re: (Score:3)
Re: (Score:2)
Things get unnecessarily messy when you have to do them all in one line. However, if I were doing this as a one-time operation, I'd start with something like what you suggest, dumping the results into file1.
Then I'd cat the whole thing through awk '{ print $1 }' | sort | uniq -d > file2 to get a list of all the hashes that are not unique (that way you can focus on the duplicates and not have to scan that huge file).
Then I'd grep the original file with grep -f file2 file1 > file3 to get the full output of th
Re: (Score:3)
I usually use:
find . -type f -exec md5sum {} \; > /tmp/files.md5.txt
you can check back with that file:
md5sum -c /tmp/files.md5.txt
Re: (Score:2)
Right, and these are backups, so it's useful to have not just every unique file but their layout. If they were all in a folder together at one time, it's useful to preserve that fact.
It sounds like the poster is somewhat organized; he was making backups in the first place. What he failed to do was manage versioning and generations. My inclination would be to copy the entire thing onto some other file system that does block-level dedupe. Keep all the files, map them onto the same media underneath, where they
ZFS (Score:3)
Re:ZFS (Score:5, Informative)
Re: (Score:2)
Could you then use something clever in ZFS to identify files that reference shared data?
Re: (Score:3)
You have to enable it, which can be done on a per-filesystem basis. Once it's on, any new data written to that filesystem will be deduplicated. If you then turn it off, new data will not be deduplicated but data already on disk will remain deduplicated. (Unless it gets modified, of course. Then it's new data.)
PC-BSD installs onto ZFS by default if you have over 4 GB or so of RAM, but won't turn on deduplication automatically. Dedup is costly: it requires a dedup table which has 320 bytes per (variably siz
Simplify the list (Score:2)
Hardlinks? (Score:2)
There are tools for this (Score:5, Informative)
If you don't mind booting Linux (a live version will do), fdupes [wikipedia.org] has been fast enough for my needs and has various options to help you when multiple collisions occur. For finding similar images with non-identical checksums, findimagedupes [jhnc.org] will work, although it's obviously much slower than a straight 1-to-1 checksum comparison.
YMMV
Break it up into chunks (Score:2)
Use something like find to generate a rough "map" of where duplications are and then pull out duplicates from that. You can then work your way back up, merging as you go.
I've found that deja-dup works pretty well for this, but since it takes an md5sum of each file it can be slow on extremely large directory trees.
Simple dedupe algorithm (Score:5, Funny)
Delete all files but one. The remaining file is guaranteed unique!
Re: (Score:2)
Delete all files but one. The remaining file is guaranteed unique!
Preparing to delete all files. Press any key to continue.
Don't waste your time. (Score:5, Insightful)
If you really want to, sort, order and index it all, but my suggestion would be different.
If you didn't need the files in the last 5 years, you'll probably never need them at all.
Maybe one or two. Make one volume called OldSh1t, index it, and forget about it again.
Really. Unless you have a very good reason to un-dupe everything, don't.
I have my share of old files and dupes. I know what you're talking about :)
Well, the sun is shining. If you need me, I'm outside.
Re:Don't waste your time. (Score:4, Interesting)
Prioritize by file size (Score:5, Insightful)
Since the objective is to recover disk space, the smallest couple of million files are unlikely to do very much for you at all. It's the big files that are the issue in most situations.
Compile a list of all your files, sorted by size. The ones that are the same size and the same name are probably the same file. If you're paranoid about duplicate file names and sizes (entirely plausible in some situations), then crc32 or byte-wise comparison can be done for reasonable or absolute certainty. Presumably at that point, to maintain integrity of any links to these files, you'll want to replace the files with hard links (not soft links!) so that you can later manually delete any of the "copies" without hurting all the other "copies". (There won't be separate copies, just hard links to one copy.)
If you give up after a week, or even a day, at least you will have made progress on the most important stuff.
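A rough sketch of that order of attack (biggest files first, a paranoid byte-for-byte check, then hard links; the 100 MB cutoff is arbitrary, and os.link only works within one volume):

import filecmp, os
from collections import defaultdict

def dedupe_biggest_first(paths, min_size=100 << 20):
    # ignore the long tail of small files; they barely move the needle on space
    by_size = defaultdict(list)
    for p in paths:
        size = os.path.getsize(p)
        if size >= min_size:
            by_size[size].append(p)
    for size in sorted(by_size, reverse=True):             # biggest groups first
        group = by_size[size]
        keeper = group[0]
        for other in group[1:]:
            if filecmp.cmp(keeper, other, shallow=False):  # byte-wise, for absolute certainty
                os.remove(other)
                os.link(keeper, other)                     # hard link, not a soft link
                print("linked", other, "->", keeper)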
Re: (Score:2)
Remember the good old days when a 10 byte text file would take up a 2KB block on your hard drive?
Well now hard drives use a 4KB block size.
Web site backups = millions of small files = the worst case scenario for space
Re:Prioritize by file size (Score:4, Informative)
ZIP, test, then Par2 the zip. Even at the worst possible compression level, with the archive coming out at greater than 100% of the original file sizes, you just saved a ton of space.
I got to the point where I rarely copy small files without first zipping them on the source drive. Copying them loose takes so frigging long, when a full zip or tarball takes seconds. Even a flat tar without the gzip step is a vast improvement, since the filesystem doesn't have to be continually updated. But zipping takes so little resource that Windows XP's "zipped folders" actually makes a lot of sense for any computer after maybe 2004, even with the poor implementation.
Linux livecd? (Score:4)
perhaps you could boot with a livecd and mount your windows drives under a single directory? Then:
find /your/mount/point -type f -exec sha256sum > sums.out
uniq -u -w 64 sums.out
Re: (Score:2)
Damn, just remembered that won't include the filename :) I'll reply with a fix once I get back to my PC, unless someone else beats me to it.
Re:Linux livecd? (Score:4, Insightful)
Re: (Score:2)
Fixed below:
find /exports -type f | xargs -d "\n" sha256sum > sums.out
sort sums.out | uniq -d -w 64
You could also do another pipe to run it in one line, but this way you have a list of files and checksums if you want them for anything else in the future.
don't run the app on a usb EXT disk (Score:3)
Put the disk on the built-in SATA bus, or use eSATA or even FireWire.
Don't worry about it (Score:2, Insightful)
First, copy everything to a NAS with new drives in it in RAID 5. Store the old drives someplace safe (they may stop working if left off for too long, but if something does go wrong with the NAS it's better to have them, right?).
Then, copy ever
Desired outcome (Score:2)
You don't say what your desired outcome is.
If this were my data, I would proceed like this:
There will
File Groupings (Score:2)
The problem with a lot of file duplication tools is that they only consider files individually and not their location or the type of file. Often we have a lot of rules about what we'd like to keep and delete - such as keeping an mp3 in an album folder but deleting the one from the 'random mp3s' folder, or always keeping duplicate DLL files to avoid breaking backups of certain programs.
With a large and varied enough collection of files it would take more time to automate that than you would want to spend. Th
linux/cygwin solution (Score:2)
I was just looking at this for a much smaller pile of data (around 300 GB) and came across this http://ldiracdelta.blogspot.com/2012/01/detect-duplicate-files-in-linux-or.html [blogspot.com]
fun project (Score:3)
I had to do that with an iTunes library recently. Nowhere near the number of items you're working with, but same principle - watch your O's. (That's the first time I've had to deal with a 58 MB XML file!) After the initial run forecasting 48 hrs and not being highly reliable, I dug in and optimized. A few hours later I had a program that would run in 48 seconds. When you're dealing with data sets of that size, process optimizing really can matter that much. (If it's taking too long, you're almost certainly doing it wrong.)
The library I had to work with had an issue with songs being in the library multiple times, under different names, and that ended up meaning there was NOTHING unique about the songs short of the checksums. To make matters WORSE, I was doing this offline. (I did not have access to the music files which were on the customer's hard drives, all seven of them)
It sounds like you are also dealing with differing filenames. I was able to figure out a unique hashing system based on the metadata I had in the library file. If you can't do that, and I suspect you don't have any similar information to work with, you will need to do some thinking. Checksumming all the files is probably unnecessarily wasteful. Files that aren't the same size don't need to be checksummed. You may decide to consider files with the same size AND same creation and/or modification dates to be identical. That will reduce the number of files you need to checksum by several orders of magnitude. A file key may be "filesize:checksum", where unique filesizes just have a 0 for the checksum.
Write your program in two separate phases. The first phase is to gather checksums where needed. Make sure the program is resumable. It may take a while. It should store a table somehow that can be read by the second program. The table should include full pathname and checksum. For files that did not require checksumming, simply leave it zero.
Phase 2 should load the table, and create a collection from it. Use a language that supports it natively. (realbasic does, and is very fast and mac/win/lin targetable) For each item, do a collection lookup. Collections store a single arbitrary object (pathname) via a key. (checksum) If the collection (key) doesn't exist, it will create a new collection entry with that as its only object. if it already exists, the object is appended to the array for that collection. That's the actual deduping process, and will be done in a few seconds. Dictionaries and collections kick ass for deduping.
From here you'll have to decide what you want to do.... delete, move, whatever. Duplicate songs required consolidation of playlists when removing dups for example. Simply walk the collection, looking for items with more than one object in the collection. Decide what to keep and what to do elsewise with (delete?) I recommend dry-running it and looking at what it's going to do before letting it start blowing things away.
It will take 30-60 min to code probably. The checksum part may take awhile to run. Assuming you don't have a ton of files that are the same size (database chunks, etc) the checksumming shouldn't be too bad. The actual processing afterward will be relatively instantaneous. Use whatever checksumming method you can find that works fastest.
The checksumming part can be further optimized by doing it in two phases, depending on file sizes. If you have a lot of files that are large-ish (>20mb) that will be the same size, try checksumming in two steps. Checksum the first 1mb of the file. If they differ, ok, they're different. If they're the same, ok then checksum the entire file. I don't know what your data set is like so this may or may not speed things up for you.
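Phase 2, roughly, in Python rather than REALbasic (the tab-separated table format is my assumption; the "filesize:checksum" key and the zero-for-unique-sizes convention come from the description above):

from collections import defaultdict

def load_table(path):
    # phase 1 output: one "checksum<TAB>filesize<TAB>full pathname" line per file,
    # with checksum 0 for files whose size was unique
    with open(path, encoding="utf-8") as f:
        for line in f:
            checksum, size, pathname = line.rstrip("\n").split("\t", 2)
            yield checksum, size, pathname

def group_dupes(table_path):
    groups = defaultdict(list)            # the 'collection', keyed by filesize:checksum
    for checksum, size, pathname in load_table(table_path):
        if checksum == "0":               # unique size, can't be a dupe
            continue
        groups[f"{size}:{checksum}"].append(pathname)
    return {k: v for k, v in groups.items() if len(v) > 1}

# dry run: print what would be consolidated before deleting or moving anything
for key, paths in group_dupes("checksums.tsv").items():
    print(key, paths)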
CRCing & diff-ing do not a consistent deduping (Score:3)
After you have found the "equal files", you need to decide which one to erase and which ones to keep. For example, let's say that a gif file is part of a web site and is also present in a few other places because you backed it up to removable media which later got consolidated. If you choose to erase the copy that is part of the website structure, the website will stop working.
Lucky for you, most filesystem implementations nowadays include the ability to create links, both hard and soft (in Windows, that would be NTFS symbolic links since Vista and junction points since Win2K; in *nix it's the soft and hard links we know and love; and on the Mac, the engineers added hard links to whole directories). So the solution must not only identify which files are the same, but also keep one copy while preserving accessibility; this is what makes Apple (r)(c)(tm) work so well. You will need a script that, upon identifying equal files, erases all but one and creates symlinks from all the erased ones to the surviving one.
FreeFileSync (Score:2)
If you're like me, you copied folders or trees, instead of individual files. FreeFileSync will show you which files are different between two folders.
Grab two folders you think are pretty close. Compare. Then Sync. This copies dissimilar files in both directions. Now you have two identical folders/files. Delete one of the folders. Wash, rinse, repeat.
Time consuming, but
Manual work will have to be done (Score:5, Informative)
Your problem isn't un-duping files in your archives; your problem is getting an overview of your data archives. If you had that, you wouldn't have dupes in the first place.
This is a larger personal project, but you should take it on, since it will be a good lesson in data organisation. I've been there and done that.
You should get a rough overview of what you're looking at and where to expect large sets of dupes. Do this by manually parsing your archives in broad strokes. If you want to automate dupe-removal, do so by de-duping smaller chunks of your archive. You will need extra CPU and storage - maybe borrow a box or two from friends and set up a batch of scripts you can run from Linux live CDs with external HDDs attached.
Most likely you will have to do some scripting or programming, and you will have to devise a strategy not only for dupe removal, but for merging the remaining skeletons of dirtrees. That's actually the tough part. Removing dupes takes raw processing power and can be done in a few weeks with brute force and solid storage bandwidth.
Organising the remaining stuff is where the real fun begins. ... You should start thinking about what you are willing to invest and how your backup, versioning and archiving strategy should look in the end, data/backup/archive retrieval included. The latter might even determine how you go about doing your dirtree diffs - maybe you want to use a database for that for later use.
Anyway you put it, just setting up a box in the corner and having a piece of software churn away for a few days, weeks or months won't solve your problem in the end. If you plan well, it will get you started, but that's the most you can expect.
As I say: Been there, done that.
I still have unfinished business in my backup/archiving strategy and setup, but the setup now is two 1 TB external USB 3 drives and manual rsync sessions every 10 weeks or so to copy from HDD-1 to HDD-2 to have dual backups/archives. It's quite simple now, but it was a long hard way to clean up the mess of the last 10 years. And I actually was quite conservative about keeping my boxes tidy. I'm still missing external storage in my setup, aka Cloud Storage, the 2012 buzzword for that, but it will be much easier for me to extend to that, now that I've cleaned up my shit halfway.
Good luck, get started now, work in iterations, and don't be silly and expect this project to be over in less than half a year.
My 2 cents.
Use the Goldwyn algorithm (Score:2)
Delete the dupes, but be sure to make copies first.
Already done it - python script (Score:4, Informative)
I found a python script online and hacked it a bit to work on a larger scale.
The script originally scanned a directory, found files with same size, and md5'ed them for comparison.
Among other things I added an option to ignore files under a certain size, and to cache MD5s in a SQLite db. I also think I made some changes to the script to handle a large number of files better and do more efficient MD5 hashing (I also added an option to limit the number of bytes to MD5, but that didn't make much difference in performance for some reason). I also added an option to hard link files that are the same.
With inodes in memory, and sqlite db already built, it takes about 1 second to "scan" 6TB of data. First scan will probably take a while, tho.
Script here [dropbox.com] - It's only tested on Linux.
Even if it's not perfect, it might be a good starting point :)
If You're Like Me (Score:3, Interesting)
The problem started with a complete lack of discipline. I had numerous systems over the years and never really thought I needed to bother with any tracking or control system to manage my home data. I kept way too many minor revisions of the same file, often forking them over different systems. As time passed and I rebuilt systems, I could no longer remember where all the critical stuff was, so I'd create tar or zip archives over huge swaths of the file system just in case. I eventually decided to clean up like you are now when I had over 11 million files. I am down to less than half a million now. While I know there are still effective duplicates, at least the size is what I consider manageable. For the stuff from my past, I think this is all I can hope for; however, I've now learned the importance of organization, documentation and version control so I don't have this problem again in the future.
Before even starting to de-duplicate, I recommend organizing your files in a consistent folder structure. Download MediaWiki and start a wiki documenting what you're doing with your systems. The more notes you make, the easier it will be to reconstruct work you've done as time passes. Do this for your other day-to-day work as well. Get git and start using it for all your code and scripts. Let git manage the history and set it up to automatically duplicate changes on at least one other backup system. Use rsync to do likewise on your new directory structure. Force yourself to stop making any change you consider worth keeping outside of these areas. If you take these steps, you'll likely not have this problem again, at least on the same scope. You'll also find it a heck of a lot easier to decommission or rebuild home systems, and you won't have to worry about "saving" data if one of them craps out.
Re: (Score:3)
If you need MediaWiki to manage the documentation about your filesystem structure, you really have a problem.
TiddlyWiki [tiddlywiki.com] should be more than sufficient for that task.
I use Duplicate Cleaner (Score:2)
It does the job for me, the selection assistant is quite powerful.
http://www.digitalvolcano.co.uk/content/duplicate-cleaner [digitalvolcano.co.uk]
Fast, but the old version (2.0) was better and freeware if you can still find a copy of it.
I am a dupe duper person (Score:2)
I have too many, due to simply being a messy pig and pedantic with files.
The best tool I've found is called Duplicate Cleaner - it's from Digital Volcano.
I do not work for / am not affiliated with these people.
I've used many tools over the years, DFL, Duplic8 and "Duplicate Files Finder" - one of which had a shitty bug which matched non identical files.
Duplicate cleaners algorithm is good and the UI, while not perfect, is one of the better ones at presenting the data. Especially identifying entire branch
5TB only why dedupe? (Score:4, Insightful)
You say the data is important enough that you don't want to nuke it. Wouldn't it also be true to say that the data you've taken the trouble to copy more than once is likely to be important? So keep those dupes.
To me not being able to find stuff (including being aware of stuff in the first place) would be a bigger problem
Use a hashing tool (Score:2)
As many others have stated, use a tool that computes a hash of file contents. Coincidentally, I wrote one last week to do exactly this [blogspot.ca] when I was organizing my music folder. It'll interactively prompt you for which file to keep among the duplicates once it's finished scanning. It churns through about 30 GB of data in roughly 5 minutes. Not sure if it will scale to 4.2 million files, but it's worth a try!
Use DROID 6 (Score:5, Informative)
There is a digital preservation tool called DROID (Digital Record Object Identification) which scans all the files you ask it to, identifying their file type. It can also optionally generate an MD5 hash of each file it scans. It's available for download from sourceforge (BSD license, requires Java 6, update 10 or higher).
http://sourceforge.net/projects/droid/ [sourceforge.net]
It has a fairly nice GUI (for Java, anyway!), and a command line if you prefer scripting your scan. Once you have scanned all your files (with MD5 hash), export the results into a CSV file. If you like, you can first also define filters to exclude files you're not interested in (e.g. small files could be filtered out). Then import the CSV file into the data analysis app or database of your choice, and look for duplicate MD5 hashes. Alternatively, DROID actually stores its results in an Apache Derby database, so you could just connect directly to that rather than export to CSV, if you have a tool that can work with Derby.
One of the nice things about DROID when working over large datasets is you can save the progress at any time, and resume scanning later on. It was built to scan very large government datastores (multiple TB). It has been tested over several million files (this can take a week or two to process, but as I say, you can pause, save or restore at any time, although only from the GUI, not the command line).
Disclaimer: I was responsible for the DROID 4, 5 and 6 projects while working at the UK National Archives. They are about to release an update to it (6.1 I think), but it's not available just yet.
Why it's taking so long... (Score:2)
So your de-dupe ran for a week before you cut it out? On a modern CPU, the de-dupe is limited not by the CPU speed (since deduplication basically just checksums blocks of storage), but by the speed of the drives.
What you need to do is put all this data onto a single RAID10 array with high IO performance. 5TB of data, plus room to grow on a RAID10 with decent IOPS would probably be something like 6 3TB SATA drives on a new array controller. Set up the array with a large stripe size to prioritize reads (write
A second box (Score:2)
At a superficial level, the issue would seem to be quite hard, but with a little planning it shouldn't be *that* hard.
My path would be to go out and build a new file server running either Windows Server or Linux, based on what OS your current file server uses, install the de-dupe tool of your choice from the many listed above, and migrate your entire file structure from your current box to the new box - the de-dupe tools will work their magic as the files trip in over the network connection. Once de-dup
Here are a couple of ways, but... (Score:2)
This gives a sha256sum list of all files, assuming you are in Linux, writing it to list.sha256 in the base of your home folder:
You may replace sha256sum with another checksum routine if you want, such as sha512sum, md5sum, sha1sum, or another preference.
Now sort the file:
(Notice, this creates a sorted list according to the sha256 value but with the path to the file as
By the time you have sorted this out... (Score:4, Insightful)
...it will have cost you far more than simply buying another drive(s) if all you are really concerned about is space...
My own script (feel free to change) (Score:3)
My home-rolled solution to exactly this problem is: http://gnosis.cx/bin/find-duplicate-contents [gnosis.cx].
This script is efficient algorithmically and has a variety of options to work incrementally and to optimize common cases. It's not excessively user-friendly, possibly, but the --help screen gives reasonable guidance. And the whole thing is short and readable Python code (which doesn't matter for speed, since the expensive steps like MD5 are callouts to fast C code in the standard library).
Merge (Score:3)
Best tool. http://hungrycats.org/~zblaxell/dupemerge/faster-dupemerge [hungrycats.org] worked great for me in the past 10 years. Scales.
Re:Good free command line tool (Score:4, Interesting)
I recently had this problem and solved it with finddupe (http://www.sentex.net/~mwandel/finddupe/). It's a free command line tool. It can create hardlinks, you can tell it which is a master directory to keep and which directories to delete, and it can create a batch file to actually do the deletion if you don't trust it or just want to see what it will do. Highly recommend. In any case, 5 TB is going to take forever, but with finddupe you can be sure your time is not wasted, unlike one of the free tools that analyzed my drive for 12 hours and then told me it would only fix ten duplicates.
I tried this vs. Clone Spy, Fast Duplicate File Finder, Easy Duplicate File Finder, and the GPL Duplicate Files Finder (crashy). (Side note: Get some creativity guys). There's no UI but I don't care. It doesn't keep any state between runs so run it a few times on subdirectories to make sure you know what it's doing first then let it rip.
Re: (Score:2)
Worry? Multiple different resolutions serve a purpose - different resolution playback devices.
Re: (Score:3)
I wrote my own to do exactly this, thinking it would be vastly superior to anything I could have downloaded.
File size collisions are a lot more common than one would realize. Even the following algorithm takes a very long time to complete on any sizeable data source:
- Find all files, storing directory and filename as separate strings to prevent memory allocation issues (the path will be the same for lots of files, so keep it in memory once - a hashtable or binsearch or similar optimized storage makes this n
Re:Wait it out (Score:4, Insightful)
I will go out on a limb, risk my geek card and propose another alternative:
Windows Server 2012 has a deduplication feature which works atop of NTFS (not ReFS). Unlike "real" deduplication on the LVM level which you get with your EMC, the files are written to the filesystem fully "hydrated", and as time passes, a background task [1] sifts through the blocks, finds ones that are the same, then adds reparse points.
The reason I'm suggesting this is that if one already has a Windows file server, it might be good to slap on 2012 when it is available, configure deduplication on a dedicated storage volume, and let it do the dirty work on the block level for you.
Of course, ZFS is the most elegant solution, but it may not be the best in the application.
[1]: Fire up PowerShell and type in:
Start-DedupJob E: -Type Optimization
if you want to do it in the foreground after setting it up, if you did a large copy and want to dedupe it all.
Re: (Score:3)
As the author of "same", I was going to post the above suggestion.
Last time I used "same", 4.2 million files was peanuts. Of course, running through 4.8 TB of data is going to take some time.
People above are doing suggestions like doing CRCs of the files. Checking filesizes. Etc etc. Same does all of this:
First a list is compiled of the files to be handled. Then each file is stat-ed to determine its size. Then only same-size files are considered candidates for being the same. Next if the filesizes are the sa