Differential Cryptanalysis of SipHash
Abstract
SipHash is an ARX-based message authentication code developed by Aumasson and Bernstein. SipHash was designed to be fast on short messages. Many implementations and applications of SipHash already exist, whereas its cryptanalysis lags behind. In this paper, we provide the first published third-party differential cryptanalysis of SipHash. We use existing automatic tools to find differential characteristics for SipHash. To improve the quality of the results, we propose several extensions for these tools. For instance, to get a good probability estimation for differential characteristics in SipHash, we generalize the concepts presented by Mouha et al. and Velichkov et al. to calculate the probability of ARX functions. Our results are a characteristic for SipHash-2-4 with a probability of \(2^{-236.3}\) and a distinguisher for the Finalization of SipHash-2-4 with practical complexity. Even though our results do not pose any threat to the security of SipHash-2-4, they significantly improve the results of the designers and give new insights into the security of SipHash-2-4.
Keywords
Message authentication code, MAC, Cryptanalysis, Differential cryptanalysis, SipHash, S-functions, Cyclic S-functions
1 Introduction
A message authentication code (MAC) is a cryptographic primitive used to ensure the integrity and the origin of messages. Normally, a MAC takes a secret key \(K\) and a message \(M\) as input and produces a fixed-size tag \(T\). A receiver of such a message-tag pair verifies the authenticity of the message by simply recalculating the tag \(T\) for the message and comparing it with the received one. If the two tags are the same, the origin of the message and its integrity are ensured.
SipHash [1] was proposed by Aumasson and Bernstein due to the lack of MACs that are fast on short inputs. Aumasson and Bernstein suggest two main fields of application for SipHash. The first is as a replacement for non-cryptographic hash functions used in hash tables, and the second is to authenticate network traffic. The need for a fast MAC used in hash tables arises from the existence of a denial-of-service attack called "hash flooding" [1]. This attack uses the fact that it is easy to find collisions for non-cryptographic hash functions. With the help of such collision-producing inputs, an attacker is able to degenerate hash tables into, e.g., linked lists. Such a degeneration drastically increases the time needed for operations like searching and inserting elements and can lead to denial-of-service attacks.
SipHash is already implemented in many applications. For example, SipHash is used as hash() in Python on all major platforms, in the dnscache instances of OpenDNS resolvers, and in the hash table implementation of Ruby. Besides these, other applications and dozens of third-party implementations of SipHash can be found on the SipHash website^{1}.
In this paper, we provide the first external security analysis of SipHash regarding differential cryptanalysis. To find differential characteristics, we adapt techniques originally developed for the analysis of hash functions to SipHash. Using differential cryptanalysis to find collisions for hash functions has become very popular since the attacks on MD5 and SHA-1 by Wang et al. [15, 16]. As a result, a number of automated tools have been developed to aid cryptographers in their search for valid characteristics [6, 7, 8]. For hash functions, the probability of a characteristic does not play an important role, since message modification can be used to improve the probability and create collisions. However, this is not possible for keyed primitives like MACs. Therefore, we have to modify existing search tools to take the probability of a characteristic into account. With the help of these modified tools, we are able to improve the quality of the results for SipHash.
In cryptographic primitives consisting solely of modular additions, rotations, and xors (like SipHash), only the modular addition contributes to the probability of a differential characteristic if xor differences are used for representation. A method to calculate the exact differential probability of modular additions is presented by Mouha et al. [9]. In constructions like SipHash, modular additions, rotations, and xors interact; hence, the characteristic uses many intermediate values of the single rounds and is therefore divided into many small sections. To get a more exact prediction of the probability of the characteristic, it is desirable to calculate the probability of subfunctions combining modular additions, rotations, and xors. Therefore, we introduce the concept of cyclic S-functions. This concept is a generalization of the work done by Mouha et al. [9] and Velichkov et al. [13] to generalized conditions [3]. Although all the basic concepts needed to create cyclic S-functions are already included in the work of Velichkov et al. [13], we do not consider the generalization to generalized conditions trivial, since we have not seen a single use of it. Cyclic S-functions will help analysts and designers of ARX-based cryptographic primitives to provide closer bounds for the probability of differential characteristics.
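To make this concrete, the probability of a single modular addition under xor differences can be computed in closed form with the formula of Lipmaa and Moreau, with which the S-function method of Mouha et al. [9] agrees; the following sketch (function name ours) illustrates the computation.

```python
def xdp_add(alpha: int, beta: int, gamma: int, n: int = 64) -> float:
    """Exact xor-differential probability of z = x + y (mod 2^n) for
    input differences alpha, beta and output difference gamma."""
    mask = (1 << n) - 1

    def eq(a, b, c):
        # Bit is 1 exactly where a, b and c agree.
        return (~a ^ b) & (~a ^ c) & mask

    # Validity (Lipmaa-Moreau): where the previous bits of all three
    # differences agree, the current difference bit is forced.
    if eq(alpha << 1, beta << 1, gamma << 1) & (alpha ^ beta ^ gamma ^ (beta << 1)) & mask:
        return 0.0
    # Every position below the MSB where the differences do not all
    # agree contributes a factor 1/2.
    w = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
    return 2.0 ** -w
```

For example, a difference in the least significant bit of one addend passes to the output with probability 1/2, while differences in the most significant bits of both addends cancel for free.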
Table 1. Best found characteristics.

Instance | Type | Probability | Reference
SipHash-2-4 | High probability | \(2^{-498}\) | [1]
SipHash-2-4 | High probability | \(2^{-236.3}\) | Sect. 5.1
SipHash-2-x | Internal collision | \(2^{-236.3}\) | Sect. 5.1
SipHash-1-x | Internal collision | \(2^{-167}\) | Sect. 5.1
4-round Finalization | High probability | \(2^{-35}\) | Sect. 5.2
The paper starts with a description of SipHash in Sect. 2. Section 3 explains the basic concepts and strategies we use to search for differential characteristics. Section 4 deals with improvements of the automatic search techniques to find suitable characteristics for SipHash. Finally, the most significant differential characteristics for SipHash found by us are presented in Sect. 5. Further results on SipHash are given in Appendix A.
2 Description of SipHash

Initialization. The internal state \(V\) of SipHash consists of the four 64-bit words \(V_a\), \(V_b\), \(V_c\), and \(V_d\). First, the ASCII representation of the string “somepseudorandomlygeneratedbytes” is written to the internal state. Then, the 128-bit key \(K=K_1\mathop {\Vert }K_0\) is xored to the state words: \(V_a\mathop {\Vert }V_b\mathop {\Vert }V_c\mathop {\Vert }V_d = (V_a\mathop {\Vert }V_b\mathop {\Vert }V_c\mathop {\Vert }V_d) \oplus (K_0\mathop {\Vert }K_1\mathop {\Vert }K_0\mathop {\Vert }K_1)\).

Compression. The message \(M\) is padded with as many zeros as needed to reach a multiple of the block length minus 1 byte. Then, one byte encoding the length of the message modulo 256 is appended to reach a multiple of the block length. Afterwards, the message is split into \(t\) 8-byte blocks \(M_1\) to \(M_{t}\), interpreted in little-endian encoding. For each block \(M_i\), starting with \(M_1\), the following is performed: the block \(M_i\) is xored to \(V_d\), then the SipRound function is applied \(c\) times to the internal state, and finally the block \(M_i\) is xored to \(V_a\).

Finalization. After all message blocks have been processed, the constant \(\text {ff}_{16}\) is xored to \(V_c\). Subsequently, \(d\) iterations of SipRound are performed. Finally, \(V_a \oplus V_b \oplus V_c \oplus V_d\) is used as the MAC value \(h_K = \text {SipHash-}c\text{-}d(K,M)\).
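The description above can be summarized in a minimal Python sketch of SipHash-2-4 (\(c = 2\) compression rounds, \(d = 4\) finalization rounds), with \(V_a,\dots,V_d\) held as plain 64-bit integers:

```python
MASK = (1 << 64) - 1

def rotl(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def sipround(va, vb, vc, vd):
    # One ARX round on the four 64-bit state words.
    va = (va + vb) & MASK; vb = rotl(vb, 13); vb ^= va; va = rotl(va, 32)
    vc = (vc + vd) & MASK; vd = rotl(vd, 16); vd ^= vc
    va = (va + vd) & MASK; vd = rotl(vd, 21); vd ^= va
    vc = (vc + vb) & MASK; vb = rotl(vb, 17); vb ^= vc; vc = rotl(vc, 32)
    return va, vb, vc, vd

def siphash24(k0, k1, message):
    # Initialization: ASCII of "somepseudorandomlygeneratedbytes" xored with the key.
    va = 0x736f6d6570736575 ^ k0
    vb = 0x646f72616e646f6d ^ k1
    vc = 0x6c7967656e657261 ^ k0
    vd = 0x7465646279746573 ^ k1
    # Padding: zeros up to one byte short of a full block, then length mod 256.
    padded = message + b"\x00" * ((8 - (len(message) + 1) % 8) % 8)
    padded += bytes([len(message) % 256])
    # Compression: c = 2 SipRounds per 8-byte little-endian block.
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        vd ^= m
        for _ in range(2):
            va, vb, vc, vd = sipround(va, vb, vc, vd)
        va ^= m
    # Finalization: xor ff into V_c, then d = 4 SipRounds.
    vc ^= 0xff
    for _ in range(4):
        va, vb, vc, vd = sipround(va, vb, vc, vd)
    return va ^ vb ^ vc ^ vd
```

Note how each message block is xored into \(V_d\) before and into \(V_a\) after the \(c\) SipRounds; this is the structure the characteristics in Sect. 5 exploit.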
Now, we discuss our naming scheme for the different variables involved in SipHash. In Fig. 1, one SipRound is shown. We indicate a specific bit of a word by \(V_{a,m,r}[i]\), where \(i \in \{0,\dots,63\}\) denotes the bit position within a word, \(m\) denotes the message block index, and \(r\) denotes the specific SipRound. Hence, to process the first message block, the input to the first SipRound is denoted by \(V_{a,1,1}\), the intermediate variables by \(A_{a,1,1}\), and the output by \(V_{a,1,2}\). Words that take part in the Finalization are indicated with \(m=f\).
3 Automatic Search for Differential Characteristics
3.1 Generalized Conditions
We use generalized (one-bit) conditions, introduced by De Cannière and Rechberger [3], to represent differential characteristics within the automatic search tool. With the help of the 16 generalized conditions, we are able to express every possible condition on a pair of bits. For instance, the generalized condition x denotes unequal values, - denotes equal values, and ? denotes that every value for a pair of bits is possible.
In addition, we also use generalized two-bit conditions [6]. Using these conditions, every possible combination of two bit pairs \((\Delta x, \Delta y)\) can be represented. In the most general form, these can be any two bits of a characteristic. In our case, such two-bit conditions are used to describe differential information on carries when computing the probability using cyclic S-functions.
3.2 Propagation of Conditions
Single conditions of a differential characteristic are connected via functions (additions, rotations, xors). Thus, the concrete value of a single condition affects other conditions. To be more precise, the information that certain values of one condition are allowed may imply that certain values of other conditions are impossible. Therefore, we are able to remove impossible values from those conditions to refine them. We say that information propagates.
Within the automatic search tool, we perform this propagation in a bitsliced manner as shown in [7]. This means that we split the functions into single bitslices and brute force them by trying all combinations allowed by the generalized conditions. In this way, we are able to remove impossible combinations.
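As a small illustration of such a bitsliced brute force, the following sketch propagates generalized conditions through a single xor bitslice \(z = x \oplus y\); the set encoding of the 16 conditions follows the convention of [3], while the function name is ours.

```python
# The 16 generalized conditions [3] as sets of allowed value pairs (x, x*).
PAIRS = [(0, 0), (1, 0), (0, 1), (1, 1)]
SYMBOLS = "#0u3n5x71-ABCDE?"
COND = {SYMBOLS[e]: {PAIRS[i] for i in range(4) if e >> i & 1}
        for e in range(16)}
NAME = {frozenset(s): c for c, s in COND.items()}

def propagate_xor(cx, cy, cz):
    """Brute-force one bitslice of z = x ^ y: keep only the value
    combinations allowed by all three conditions, then return the
    refined condition for each of x, y and z."""
    ax, ay, az = set(), set(), set()
    for x in COND[cx]:
        for y in COND[cy]:
            z = (x[0] ^ y[0], x[1] ^ y[1])
            if z in COND[cz]:
                ax.add(x); ay.add(y); az.add(z)
    return NAME[frozenset(ax)], NAME[frozenset(ay)], NAME[frozenset(az)]
```

For example, with \(x\) unrestricted (?), \(y\) carrying a difference (x), and \(z\) required to hold equal values (-), the brute force refines the condition on \(x\) to x; if no combination survives, all three conditions collapse to the contradiction #.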
For the performance of the whole search for characteristics, it is crucial to find a suitable “size” for the subfunctions of a specific cryptographic primitive that are used to perform propagation. The “size” of such a subfunction determines how many different conditions are involved when brute forcing a single bitslice. On the one hand, “big” subfunctions (many conditions involved in one bitslice) make the propagation slower. On the other hand, the amount of information that propagates is usually enhanced by using a few “big” subfunctions instead of many “small” ones. Generalized conditions are not able to represent all information gathered during propagation (mainly due to effects regarding the carry of the modular addition) [6], so we lose information between single subfunctions. Usually, less information is lost if the subfunctions are “bigger”. In fact, finding a good trade-off between speed and quality of propagation is not trivial.
3.3 Basic Search Strategy

1. Find a good starting point for the search.
2. Search for a good characteristic.
3. Use message modification to find a colliding message pair.

The search itself proceeds in three alternating stages:

Decision (Guessing). In the guessing phase, a bit whose condition is to be refined is selected. This bit can be selected randomly or according to a heuristic.

Deduction (Propagation). In this stage, the effects of the previous guess on other conditions are determined (see Sect. 3.2).

Backtracking (Correction). If a contradiction is detected during the deduction stage, we try to resolve it in this stage. One way to do this is to jump back to earlier stages of the search until the contradiction can be resolved.
4 Improvements in the Automatic Search for SipHash
We have used existing automatic search tools to analyze SipHash. Those tools use the search strategy described in Sect. 3.3. This strategy has been developed to find collisions for hash functions, and it turns out to be unsuitable for keyed primitives like MACs. Therefore, we extend it to the greedy strategy described in Sect. 4.1, which uses information on the probability of characteristics, or on the impact of one guess, during the search. All results of Sect. 5 have been created with the help of this greedy strategy. To get closer bounds on the probability of a characteristic, we generalize the concepts presented by Mouha et al. [9] and Velichkov et al. [13] to cyclic S-functions (Sect. 4.2).
Another important point in the automatic search for differential characteristics is the representation of the cryptographic primitives within the search tool. We have evaluated dozens of different descriptions and present the most suitable in Sect. 4.3.
4.1 Extended Search Strategy
Our search strategy extends the strategy used in [7]. The search algorithm of Sect. 3.3 is split into three main parts: decision (guessing), deduction (propagation), and backtracking (correction). We have extended this strategy to perform greedy searches using quality criteria like the probability. In short, we perform the guessing and propagation phases several times on the same characteristic. After that, we evaluate the resulting characteristics and keep the one with the best probability. Then, the next iteration of the search starts. The following algorithm describes the search in more detail:
Let \(U\) be the set of bits with condition ? in the current characteristic \(A\). In \(H\), we store all characteristics that have been visited during the search. \(L\) is a set of candidate characteristics for \(A\). \(n\) is the number of guesses. \(B_\mathtt{-}\) and \(B_\mathtt{x}\) are characteristics.
1. Generate \(U\) from \(A\). Clear \(L\). Set \(i\) to 0.
2. Pick a bit from \(U\).
3. Restrict this bit in \(A\) to - to get \(B_\mathtt{-}\) and to x to get \(B_\mathtt{x}\).
4. Perform propagation on \(B_\mathtt{-}\) and \(B_\mathtt{x}\).
5. If both \(B_\mathtt{-}\) and \(B_\mathtt{x}\) are inconsistent, mark the bit as critical and go to Step 13; else continue.
6. If \(B_\mathtt{-}\) is consistent and not in \(H\), add it to \(L\). Do the same for \(B_\mathtt{x}\).
7. Increment \(i\).
8. If \(i\) equals \(n\), continue with the evaluation in Step 9; else go to Step 2.
9. Set \(A\) to the characteristic with the highest probability in \(L\).
10. Add \(A\) to \(H\).
11. If there is no ? left in \(A\), output \(A\). Then set \(A\) to a characteristic of \(H\).
12. Continue with Step 1.
13. Jump back until the critical bit can be resolved.
14. Continue with Step 1.
For performance reasons, we store hash values of characteristics in \(H\). In addition, we maintain a second list \(H^*\), in which we store the next-best characteristics of \(L\) according to a certain heuristic. The characteristics of \(H^*\) are also used for backtracking. If a characteristic is found and \(U\) is empty, we take a characteristic out of \(H^*\) instead of \(H\) in Step 11. After a while, we perform a soft restart, where everything except \(H\) is reset to its initial value (in particular, \(H^*\) is cleared).
This search strategy turns out to work well when searching for high-probability characteristics that do not lead to collisions. When searching for colliding characteristics, we have to adapt the given algorithm and perform a best impact strategy similar to Eichlseder et al. [4].
The best impact strategy differs from the strategy described above in the following points. We do not calculate the characteristic \(B_\mathtt{x}\). Moreover, instead of taking the probability of \(B_\mathtt{-}\) as quality criterion for the selection, we select the variant of \(B_\mathtt{-}\) in the candidate list \(L\) where the most information propagates. As a figure of merit for the amount of information that propagates, we take the number of conditions with value ? that have changed their value due to the propagation.
This best impact strategy has several advantages. First, mostly guesses are made that have a big impact on the characteristic. This ensures that no guesses are made where nothing propagates; such guesses often imply additional restrictions on the characteristic that are not necessary. In addition, the big-impact criterion leads to rather sparse characteristics, which usually have a better probability than dense ones.
4.2 Calculating the Probability Using Cyclic S-Functions
In this section, we show how to extend the use of S-functions [9] by introducing state mapping functions \(m_i\) and making the relationship between the states cyclic. Such cyclic states occur, for instance, if rotations are combined with modular additions. Velichkov et al. showed in [13] how to calculate the additive differential probability of ARX-based functions. The method of cyclic S-functions is closely related to the methods shown in [13].
Note that every classic S-function can be transformed into a cyclic S-function by defining every \(m_i\) as the identity function except for \(m_n\), which maps every value of \(S_o[n]\) to the state \(S_i[0] = 0\). For example, for a rotation to the left by \(r=1\) and a word length of \(n=4\), the two non-identity mapping functions are:

\(S_o[1] \Rightarrow S_i[1]: (v_a, v_b) \Rightarrow (0, v_b)\)

\(S_o[4] \Rightarrow S_i[0]: (v_a, v_b) \Rightarrow (v_a, 0)\).
For a word length of \(n\) and a general rotation to the left by \(r\), the state is \(S[i] = (c_a[(i-r) \bmod n], c_b[i])\), except for the states where \(m_i\) is not the identity function. These are the states \(S_o[r] = (c_a[n], c_b[r])\), \(S_i[r] = (c_a[0], c_b[r])\), \(S_o[n] = (c_a[n-r], c_b[n])\), and \(S_i[0] = (c_a[n-r], c_b[0])\). Realizing additions with multiple rotations in between leads to more mapping functions \(m_i\) that are not the identity function. Using additions with more inputs leads to bigger carries and bigger states.
Using Graphs for Description. Similar to S-functions [9], we can build a graph representing the respective cyclic S-function. The vertices of the graph stand for the single distinct states, and the cycles in the graph represent valid solutions. Such a graph can be used either to propagate conditions or to calculate the differential probability. An illustrative example for propagation and probability calculation can be found in Appendix B.
The whole cyclic graph consists of subgraphs \(i\). Each subgraph \(i\) consists of vertices representing \(S_i[i-1]\) and \(S_o[i]\), and single edges connecting them. So each subgraph represents a single bitslice of the whole function. For the system in Fig. 3, the edges of each subgraph are calculated by trying every possible pair of input bits for \(a[((i-r-1) \bmod n) + 1]\), \(b[((i-r-1) \bmod n) + 1]\), and \(c[i]\) allowed by their generalized conditions, together with every possible carry from the set \(S_i[i-1]\), to obtain an output \(s_b[i]\) and a carry belonging to \(S_o[i]\). If the output is valid (with respect to the generalized conditions describing the possible values of \(s_b\)), an edge is drawn from the respective value of the input vertex of \(S_i[i-1]\) to the output vertex belonging to \(S_o[i]\). Such a subgraph can be created for every bitslice.
Now, we have to form a graph out of these subgraphs. Subgraphs connected via a state mapping function \(m_i\) that is the identity stay the same. There are two ways of connecting subgraphs \(i\) and \(i+1\) that are separated by a non-identity state mapping function: either the edges of subgraph \(i\) are redrawn so that they follow the mapping from \(S_o[i]\) to \(S_i[i]\), or the edges of subgraph \(i+1\) are redrawn so that they follow the inverse mapping from \(S_i[i]\) to \(S_o[i]\). We call the resulting set of subgraphs “transformed subgraphs”. After all subgraphs are connected, we can read out the valid input-output combinations. These combinations are minimal cycles in the directed graph. Since we know the size of the minimal cycles and the shape of the graph, we can transform the search for those cycles into a search for paths.
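In the classic case (every \(m_i\) the identity), this path counting specializes to the familiar S-function computation of the xor-differential probability of modular addition, which can be sketched as follows; the carry pair \((c, c^{*})\) plays the role of the state, and fully specified xor differences are used instead of generalized conditions for brevity. The function name is ours.

```python
def xdp_add_graph(alpha, beta, gamma, n=64):
    """xor-differential probability of z = x + y (mod 2^n), computed
    bitslice by bitslice over the four carry-pair states (c, c*)."""
    # dist[s] = probability mass of carry-pair state s = c + 2*c* so far.
    dist = [1.0, 0.0, 0.0, 0.0]          # both carries start at 0
    for i in range(n):
        da, db, dz = alpha >> i & 1, beta >> i & 1, gamma >> i & 1
        new = [0.0] * 4
        for s in range(4):
            c, cs = s & 1, s >> 1
            for a in (0, 1):
                for b in (0, 1):
                    a2, b2 = a ^ da, b ^ db   # second input pair
                    # Keep the transition only if the output bits
                    # differ exactly by the required difference dz.
                    if (a ^ b ^ c) ^ (a2 ^ b2 ^ cs) != dz:
                        continue
                    nc, ncs = (a + b + c) >> 1, (a2 + b2 + cs) >> 1
                    new[nc + 2 * ncs] += dist[s] * 0.25
        dist = new
    return sum(dist)   # the final carries are discarded mod 2^n
```

The cyclic variant only changes how the states of adjacent bitslices are identified with each other via the mapping functions \(m_i\).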
We consider the presented method based on cyclic S-functions to be equivalent to brute force and therefore optimal. This equivalence only holds if the words of the input and the output are independent of each other. For example, if the same input is used twice in the same function \(f\), we do not have the required independence. Such a case is the calculation of \(s = a + (a \lll 10)\).
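For small word sizes, the claimed equivalence to brute force can be checked directly; the following sketch (name ours) exhaustively computes the xor-differential probability of the add-rotate function \(f(a, b) = ((a + b) \bmod 2^n) \lll r\), which a cyclic S-function computation must reproduce.

```python
from itertools import product

def xdp_add_rot(alpha, beta, gamma, n=6, r=2):
    """Exhaustive xor-differential probability of
    f(a, b) = ((a + b) mod 2^n) <<< r for small n."""
    mask = (1 << n) - 1
    rotl = lambda x: ((x << r) | (x >> (n - r))) & mask
    good = sum(
        1
        for a, b in product(range(1 << n), repeat=2)
        if rotl((a + b) & mask) ^ rotl(((a ^ alpha) + (b ^ beta)) & mask) == gamma
    )
    return good / 4 ** n
```

Because the rotation only permutes bits, the result equals the probability of plain modular addition with the output difference rotated back by \(r\); the dependent-input case \(s = a + (a \lll 10)\) mentioned above is exactly where such reasoning, and the optimality argument, breaks down.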
4.3 Bitsliced Description of SipHash
5 Results
In this section, we give some results obtained with the presented search strategies and the new probability calculation. First, we consider characteristics that lead to internal collisions. This type of characteristic can be used to create forgeries as described in [10, 11]. To improve upon this attack, characteristics with a probability higher than \(2^{-128}\) (in the case of SipHash) are needed; otherwise, a birthday attack is preferable for finding collisions. We are able to present characteristics that lead to an internal collision for SipHash-1-x (\(2^{-167}\)) and SipHash-2-x (\(2^{-236.3}\)). The characteristic for SipHash-2-x is also the best published characteristic for full SipHash-2-4.
The last part of this section deals with a characteristic for the Finalization of SipHash-2-4. This characteristic has a considerably high probability of \(2^{-35}\) and can therefore be used as a distinguisher for the Finalization.
5.1 Colliding Characteristics for SipHash-1-x and SipHash-2-x
We start with a characteristic producing an internal collision for SipHash-1-x. We have achieved the best result with the best impact strategy, using a starting point consisting of 7 message blocks. The bits of the first message block, the key, and the last state values are set to -; the rest of the characteristic is set to ?. We introduce one difference at a randomly picked bit out of all \(A_a\), \(A_c\), \(V_a\), and \(V_c\). This strategy results in a characteristic with an estimated probability of \(2^{-169}\), which leads to an internal collision within 3 message blocks.
Table 2. Characteristic for SipHash-1-x leading to an internal collision (probability \(2^{-167}\)).

Table 3. Characteristic for SipHash-2-x leading to an internal collision (probability \(2^{-236.3}\)). This is also the best characteristic for SipHash-2-4.

Although neither characteristic has a probability higher than \(2^{-128}\), they are the best collision-producing characteristics published for SipHash so far. Especially the characteristic for SipHash-1-x (Table 2) is not far from the bound of \(2^{-128}\), at which it would become useful in an attack. Moreover, the characteristic for SipHash-2-x (Table 3) is the best published characteristic for full SipHash-2-4, with a probability of \(2^{-236.3}\); the previously best published characteristic for SipHash-2-4 has a probability of \(2^{-498}\) [1]. Nevertheless, SipHash-2-4 still has a huge security margin.
5.2 Characteristic for the Finalization of SipHash-2-4
Table 4. Distinguisher for 4 Finalization rounds (probability \(2^{-35}\)).

Considering this new result, we are able to distinguish both building blocks of SipHash-2-4, the Compression and the Finalization, from idealized versions (it is already shown in [1] that two rounds of the Compression are distinguishable). However, these two results do not endanger the full SipHash-2-4 function, which remains indistinguishable from a pseudorandom function.
6 Conclusion
This work deals with the differential cryptanalysis of SipHash. To find good results, we had to introduce new search strategies. These strategies extend previously published ones, which had solely been used in the search for collisions of hash functions. With the newly presented concepts, attacks on other primitives like MACs, block ciphers, and stream ciphers are also within reach.
Furthermore, we generalized the concept of S-functions to cyclic S-functions. With their help, cryptanalysts will be able to obtain more precise estimates of the probability of differential characteristics for ARX-based primitives.
With these new methods, we were able to improve upon the existing results on SipHash. Our results include the first published characteristics resulting in internal collisions, the first published distinguisher for the Finalization of SipHash, and the best published characteristic for full SipHash-2-4.
Future work includes applying the greedy search strategies to other ARX-based primitives, for instance block ciphers or authenticated encryption schemes, as well as further improving the automatic search tools used.
Acknowledgments
The work has been supported by the Austrian Government through the research program FITIT Trust in IT Systems (Project SePAG, Project Number 835919).
References
1. Aumasson, J.-P., Bernstein, D.J.: SipHash: a fast short-input PRF. In: Galbraith, S., Nandi, M. (eds.) INDOCRYPT 2012. LNCS, vol. 7668, pp. 489–508. Springer, Heidelberg (2012)
2. Cramer, R. (ed.): EUROCRYPT 2005. LNCS, vol. 3494. Springer, Heidelberg (2005)
3. De Cannière, C., Rechberger, C.: Finding SHA-1 characteristics: general results and applications. In: Lai, X., Chen, K. (eds.) ASIACRYPT 2006. LNCS, vol. 4284, pp. 1–20. Springer, Heidelberg (2006)
4. Eichlseder, M., Mendel, F., Schläffer, M.: Branching heuristics in differential collision search with applications to SHA-512. IACR Cryptology ePrint Archive, Report 2014/302 (2014)
5. Klima, V.: Tunnels in hash functions: MD5 collisions within a minute. IACR Cryptology ePrint Archive, Report 2006/105 (2006)
6. Leurent, G.: Construction of differential characteristics in ARX designs: application to Skein. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 241–258. Springer, Heidelberg (2013)
7. Mendel, F., Nad, T., Schläffer, M.: Finding SHA-2 characteristics: searching through a minefield of contradictions. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 288–307. Springer, Heidelberg (2011)
8. Mendel, F., Nad, T., Schläffer, M.: Improving local collisions: new attacks on reduced SHA-256. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 262–278. Springer, Heidelberg (2013)
9. Mouha, N., Velichkov, V., De Cannière, C., Preneel, B.: The differential analysis of S-functions. In: Biryukov, A., Gong, G., Stinson, D.R. (eds.) SAC 2010. LNCS, vol. 6544, pp. 36–56. Springer, Heidelberg (2011)
10. Preneel, B., van Oorschot, P.C.: MDx-MAC and building fast MACs from hash functions. In: Coppersmith, D. (ed.) CRYPTO 1995. LNCS, vol. 963, pp. 1–14. Springer, Heidelberg (1995)
11. Preneel, B., van Oorschot, P.C.: On the security of iterated message authentication codes. IEEE Trans. Inf. Theory 45(1), 188–199 (1999)
12. Sugita, M., Kawazoe, M., Perret, L., Imai, H.: Algebraic cryptanalysis of 58-round SHA-1. In: Biryukov, A. (ed.) FSE 2007. LNCS, vol. 4593, pp. 349–365. Springer, Heidelberg (2007)
13. Velichkov, V., Mouha, N., De Cannière, C., Preneel, B.: The additive differential probability of ARX. In: Joux, A. (ed.) FSE 2011. LNCS, vol. 6733, pp. 342–358. Springer, Heidelberg (2011)
14. Wang, X., Lai, X., Feng, D., Chen, H., Yu, X.: Cryptanalysis of the hash functions MD4 and RIPEMD. In: Cramer, R. [2], pp. 1–18
15. Wang, X., Yin, Y.L., Yu, H.: Finding collisions in the full SHA-1. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 17–36. Springer, Heidelberg (2005)
16. Wang, X., Yu, H.: How to break MD5 and other hash functions. In: Cramer, R. [2], pp. 19–35