Find duplicate files
fdf - a pretty clever Perl script to find duplicate files across directories. It is very simple and very fast, and it doesn't use any checksums. Instead, it reads chunks of data from two files (through the File::Compare module); as soon as two chunks differ, we already know the files can't be duplicates of each other. That is one reason it is fast. Another is that equality is transitive: if we have three duplicate files, say "A", "B" and "C", and we find that "A" equals "B" and that "A" equals "C", we don't have to compare "B" with "C", because both are duplicates of "A" and therefore must be exactly the same. Yet another reason: files are grouped by file size, so two files are compared only if they have the same size. (Getting the size of a file is a very simple and fast check.) The script accepts one argument: either &
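
The original fdf script isn't reproduced here, but a minimal sketch of the approach described above might look like the following. This is an illustration under my own assumptions, not the author's code: the variable names, the bucket-then-group structure, and the directory-argument handling are mine; only File::Compare and the size-grouping and transitivity ideas come from the description.

    #!/usr/bin/perl
    # Sketch: find duplicate files by size-bucketing plus pairwise
    # content comparison, skipping comparisons via transitivity.
    use strict;
    use warnings;
    use File::Find;
    use File::Compare qw(compare);

    my @dirs = @ARGV ? @ARGV : ('.');

    # Bucket every file by its size; only same-sized files can match.
    my %by_size;
    find(sub {
        return unless -f $_;                      # regular files only
        push @{ $by_size{ -s _ } }, $File::Find::name;
    }, @dirs);

    for my $size (keys %by_size) {
        my @files = @{ $by_size{$size} };
        next if @files < 2;                       # unique size, no duplicates

        # Each group holds files already known to be identical to its
        # first member; equality is transitive, so comparing against one
        # representative per group is enough.
        my @groups;
        FILE: for my $file (@files) {
            for my $group (@groups) {
                # compare() reads the files in chunks and returns 0 only
                # when the contents are byte-identical; errors (-1) and
                # mismatches (1) both mean "not a duplicate" here.
                if (compare($group->[0], $file) == 0) {
                    push @$group, $file;
                    next FILE;
                }
            }
            push @groups, [$file];                # starts its own group
        }

        for my $group (@groups) {
            print join("\n", @$group), "\n\n" if @$group > 1;
        }
    }

The transitivity trick lives in the inner loop: each new file is compared against only one representative per group, so a run of N identical files costs N-1 full comparisons instead of N(N-1)/2.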