Conversation

@emergie commented Feb 18, 2018

Resolves #141

This change introduces a Duplicate detection radio group in which the user may choose a suitable algorithm:

  • md5 & sha1 (default, current behaviour)
  • only md5
  • md5 of first 1M of file
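
For illustration, the three modes roughly correspond to the following checksum invocations (a minimal sketch using GNU coreutils with a placeholder path "$f", not the actual findup code):

# md5 & sha1: both digests over the whole file
md5sum "$f"; sha1sum "$f"
# only md5: a single digest over the whole file
md5sum "$f"
# md5 of first 1M: digest only the first 1MiB of the file
head -c 1048576 "$f" | md5sum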

[screenshot of the Duplicate detection radio options]

warning tooltip:
[screenshot of the warning tooltip]

fast test:

dd if=/dev/urandom of=random_1M_a bs=1M count=1
dd if=/dev/urandom of=random_1M_b bs=1M count=1
dd if=/dev/urandom of=random_1M_c bs=1M count=1
cat random_1M_a random_1M_b > sample_ab
cat random_1M_a random_1M_c > sample_ac
cp sample_ab sample_ab\ \(co\	p\"y\'\]

default md5&sha1:
[screenshot]

unsafe md5 of first 1M:
[screenshot]
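
Without the screenshots, the expected result can be checked directly with coreutils: the full-file digests of sample_ab and sample_ac differ, while the digests of their first 1MiB match, so only the unsafe mode groups them together with the copy.

md5sum sample_ab sample_ac            # full-file digests differ
head -c 1048576 sample_ab | md5sum    # the first 1MiB of sample_ab...
head -c 1048576 sample_ac | md5sum    # ...hashes the same as sample_ac's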

@pixelb (Owner) commented Feb 20, 2018

Thanks very much for taking the time to provide patches.
Have you timed the various modes on your data?

I presume you're not hitting a bottleneck from md5+sha1 itself, since computing both combined should still cost less than the I/O bottleneck, especially on spinning rust.

The shortcut of only checking the first 1MiB of each file could of course save a lot.
Do you have many large files that are the same size?
Did you notice the md5sum_approx script that's called by findup already?
Would you gain the same benefit from bumping the 512 up to 1048576 there?
In retrospect, 512 is too small for modern systems anyway.
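
For concreteness, a hedged sketch of the kind of bump being discussed, assuming the approximate hash is taken over the first N bytes of a file with dd (an illustration only; the real md5sum_approx script may differ in detail, and "$file" is just a placeholder path):

# current idea: hash only the first 512 bytes of the file
dd if="$file" bs=512 count=1 2>/dev/null | md5sum
# proposed bump: hash the first 1MiB (1048576 bytes) instead
dd if="$file" bs=1048576 count=1 2>/dev/null | md5sum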

@emergie (Author) commented Feb 20, 2018

  1. I haven't done any proper performance testing, but I'm actively battle-testing this change on my data.

  2. Yes, I'm hitting the I/O bottleneck.
    Here is a sample of my search results, only files >=500M:
    [screenshot of search results]
    Those files are accessed via ext4 -> encryption layer (LUKS) -> iSCSI over a 1GbE link to another box -> md RAID1 -> hard disks.
    Pumping that amount of data through this setup to calculate md5+sha1 would take ages.

  3. I've seen the md5sum_approx code. For my needs hashing only the first 512 bytes is not very useful
    as the main duplicate verification algorithm.
    Additionally this check doesn't take file size into account.
    In fslint every sieving step is independent.
    If A & B have size n1 and C & D have size n2 then all of them will pass
    the findup/print name, dev, inode & size step as potential duplicates.
    If A and C happen to have the same content in the first 512 bytes then the md5sum_approx step will match them despite their different sizes.
    I do have such samples in my data and that is why I added both file size & md5 hash printing in file_size_1m_md5sum.
    (A small construction of such a sample is sketched after this list.)

  4. The 1M sample size was chosen arbitrarily, mostly because it's a round number and fits my needs.
    I'm not sure if it is the best choice for everyone (probably not), but it is something to start with.

  5. I'm wondering if I placed the UI controls in the right place.
    Right now it is a radio in the Advanced search parameters tab.
    Maybe it should be in the Duplicates tab, as a dropdown/combobox to the right of the Minimum file size input?
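
As referenced in point 3, here is a minimal sketch (same dd/cat approach as the fast test above; the file names are made up for illustration) of two files that share their first 512 bytes but have different sizes, which a size-agnostic 512-byte check would group together:

dd if=/dev/urandom of=prefix_512 bs=512 count=1
dd if=/dev/urandom of=tail_small bs=1M count=1
dd if=/dev/urandom of=tail_large bs=1M count=2
cat prefix_512 tail_small > sample_A    # ~1MiB file
cat prefix_512 tail_large > sample_C    # ~2MiB file
stat -c %s sample_A sample_C            # sizes clearly differ
head -c 512 sample_A | md5sum           # yet the md5 of the first 512 bytes...
head -c 512 sample_C | md5sum           # ...is identical for both files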

@pixelb (Owner) commented Feb 20, 2018

Right, md5sum_approx is only used to quickly exclude potential duplicates.
If you bump up the 512 -> 1MiB it might help exclude more, while still being safe.
I.e. the current sieving steps are:

hard_links -> file_size -> md5(512) -> md5(all) -> sha1(all)

You're proposing:

hard_links -> file_size -> md5(1MiB)
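
A rough shell sketch of that proposed sieve, grouping by size and then by the md5 of the first 1MiB (illustrative only, not findup's implementation; it skips the hardlink step and assumes GNU coreutils and filenames without newlines):

find . -type f | while IFS= read -r f; do
    size=$(stat -c %s "$f")                               # file_size step
    sum=$(head -c 1048576 "$f" | md5sum | cut -d' ' -f1)  # md5(1MiB) step
    printf '%s %014d %s\n' "$sum" "$size" "$f"
done | sort | uniq -w47 --all-repeated=separate           # groups sharing hash+size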

BTW, if there are many hardlinked files but you're sure they are always in disparate groups, then you could enable merge_early in the findup script to improve sieving down to a single member of each hardlink group.

@emergie (Author) commented Feb 23, 2018

Yes, that is my proposition from this pull request: to add the 2 modes (only md5, and md5 of the first 1M of each file).

Original fslint behaviour would be preserved as the default: after the changes from this PR the md5+sha1 pass remains the default mode of duplicate verification.
