 

Many iterations on a hash: doesn't it reduce entropy?

I see this technique recommended in many places (including Stack), and I can't get it out of my head that this would reduce entropy! After all, you are hashing something again that has already been hashed and already has a collision chance. Wouldn't collision chance on top of collision chance result in more collision chances? After researching, it seems I'm wrong, but why?

Hugo Mota asked Apr 04 '12

People also ask

Does hashing reduce entropy?

If the hash function is collision resistant, hashing does not change the entropy of the key.

What is hash entropy?

The true measure of the actual information within a hashed value is measured by entropy. This determines the actual amount of information contained in the data, and it is measured in bits. If the hash is truly random, the number of bits in the hash will be the entropy value.

How do you reduce a hash collision?

If two records are directed to the same cell, both go into that cell as a linked list. This resolves hash collisions gracefully, since records with the same hash value can share a cell, but it has its disadvantages.
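
For illustration, here is a minimal sketch of that chaining idea in Python (the class and method names are my own, not from any particular library): each bucket holds a list, so colliding keys coexist instead of overwriting each other.

    # Minimal separate-chaining hash table sketch (illustrative only).
    class ChainedHashTable:
        def __init__(self, size=8):
            self.buckets = [[] for _ in range(size)]  # each cell is a list (the "chain")

        def _index(self, key):
            return hash(key) % len(self.buckets)

        def put(self, key, value):
            bucket = self.buckets[self._index(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:                 # key already present: update in place
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))      # colliding keys simply share the bucket

        def get(self, key):
            for k, v in self.buckets[self._index(key)]:
                if k == key:
                    return v
            raise KeyError(key)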

What is the most efficient hashing algorithm?

Probably the one most commonly used is SHA-256, which the National Institute of Standards and Technology (NIST) recommends using instead of MD5 or SHA-1. The SHA-256 algorithm returns a hash value of 256 bits, or 64 hexadecimal digits.
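
As a quick sanity check of those numbers, Python's standard hashlib shows the 256-bit / 64-hex-digit digest size (the input string here is arbitrary):

    import hashlib

    digest = hashlib.sha256(b"hello").hexdigest()
    print(digest)            # 64 hexadecimal characters
    print(len(digest) * 4)   # 256 bits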


1 Answer

Since you tagged md5, I'll use that as an example. From Wikipedia:

if two prefixes with the same hash can be constructed, a common suffix can be added to both to make the collision more likely to be accepted as valid data by the application using it. Furthermore, current collision-finding techniques allow to specify an arbitrary prefix: an attacker can create two colliding files that both begin with the same content. All the attacker needs to generate two colliding files is a template file with a 128-byte block of data, aligned on a 64-byte boundary that can be changed freely by the collision-finding algorithm. An example MD5 collision, with the two messages differing in 6 bits, is:

And then the example plaintexts they give are 256 bytes long. Since the collision attack relies on a 128 byte block of data, and the hash digest is only 128 bits, there really isn't an increased risk of a collision attack succeeding beyond the first iteration - that is to say that you can't really influence the likelihood of a collision beyond the first hash.

Also consider that the entropy of the hash is the aforementioned 128 bits. Even granting that the best known collision attack needs only about 2^20.96 MD5 computations (again from Wikipedia), it would take a great number of iterations for two inputs to collide by accident. The first-glance reasoning that I think you're falling victim to is:

  • Any two arbitrary inputs have a chance of collision x%.
  • The outputs of the first hash are two such inputs themselves.
  • Therefore, every iteration increases the chance of collision by x%.

This can be disproven by counterexample fairly easily. Consider again MD5:

  • The chance of collision between two inputs is 1:2^21 (taking the worst-case scenario from Wikipedia's cryptanalysis of MD5).
  • Hashing again compounds an equally likely chance of collision, so the chance of collision at the second round is 1:2^20.
  • Therefore, any two inputs hashed a number of times equal to the entropy of the digest are guaranteed to collide.

MD5 any two inputs 128 times in a row and you will see that this is not true. You probably won't find a single repeated hash between them - after all, you've only created 256 out of a possible 2^128 hash values, which barely scratches the space. The probability of a collision at each round is independent of all other rounds.
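
If you want to see that empirically, a quick script along these lines (using Python's hashlib; the two starting inputs are arbitrary) iterates MD5 128 times on each of two inputs and checks whether any digest ever repeats, within a chain or across the two chains:

    import hashlib

    def md5_chain(start, rounds=128):
        """Hash `start`, then keep hashing each digest again, `rounds` times."""
        digests = []
        current = start
        for _ in range(rounds):
            current = hashlib.md5(current).digest()
            digests.append(current)
        return digests

    chain_a = md5_chain(b"input one")
    chain_b = md5_chain(b"input two")

    all_digests = chain_a + chain_b
    print(len(all_digests), "digests,", len(set(all_digests)), "unique")
    # Expected in practice: 256 digests, 256 unique -- no collisions.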

There are two ways to understand why this is so. The first is that each iteration is essentially trying to hit a moving target. I think you could construct a proof based on the birthday paradox that a surprisingly low number of hashing iterations would likely produce a digest of one input that matches some digest of the other input. But those matches would almost certainly occur at different steps of the iteration. And once that happens, the two chains can never produce the same output at the same iteration, because the hash algorithm itself is deterministic.
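
A small sketch of that determinism point (the setup is artificial: I force the cross-step match by starting the second chain from a digest of the first, since finding one by chance would take astronomically long):

    import hashlib

    def step(d):
        return hashlib.md5(d).digest()

    # Chain A: ordinary iteration from some arbitrary input.
    a = [b"some input"]
    for _ in range(10):
        a.append(step(a[-1]))

    # Chain B: artificially started from A's digest at step 3,
    # simulating a cross-chain match at different steps.
    b = [a[3]]
    for _ in range(7):
        b.append(step(b[-1]))

    # B at step k equals A at step k + 3: the chains merge but stay offset,
    # so (barring a cycle) they never share a digest at the same iteration index.
    print(all(b[k] == a[k + 3] for k in range(len(b))))  # True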

The other way is to realize that the hash function actually adds entropy while it runs. Consider that an empty string has a 128-bit digest just like any other input; that cannot happen without entropy being added during the algorithm's steps. This is actually a necessary part of a cryptographic hash function: data must be destroyed, or else the input could be recovered from the digest. For inputs longer than the digest, yes, entropy is lost on the whole; it has to be in order to fit into the digest length. But some entropy is also added.
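
For instance, even the empty string already gets a full 128-bit MD5 digest:

    import hashlib

    print(hashlib.md5(b"").hexdigest())
    # d41d8cd98f00b204e9800998ecf8427e  (32 hex digits = 128 bits, from zero input bits)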

I don't have equally exact numbers for other hash algorithms, but I think all the points I've made generalize to other hash functions and one-way / mapping functions.

Patrick M answered Nov 09 '22