Time-Memory Trade-Offs for Near-Collisions

Conference paper

DOI: 10.1007/978-3-662-43933-3_11

Part of the Lecture Notes in Computer Science book series (LNCS, volume 8424)
Cite this paper as:
Leurent G. (2014) Time-Memory Trade-Offs for Near-Collisions. In: Moriai S. (eds) Fast Software Encryption. FSE 2013. Lecture Notes in Computer Science, vol 8424. Springer, Berlin, Heidelberg


In this work we consider generic algorithms to find near-collisions for a hash function. If we count only hash computations, it is easy to compute a lower bound on the complexity of near-collision algorithms, and to build a matching algorithm. However, this algorithm needs a lot of memory, and makes more than \(2^{n/2}\) memory accesses. Recently, several algorithms have been proposed without this memory requirement; they require more hash evaluations, but the attack is actually more practical. They can be divided into two main categories: they are either based on truncation, or based on covering codes.
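The truncation idea can be sketched as follows (an illustrative toy, not the paper's exact algorithm): to find a w-near-collision on an n-bit hash, search for full collisions on the hash truncated to n - t bits; any pair colliding on the truncated value differs only in the t dropped bits, so it is a candidate near-collision on the full output. The parameters n, t, and w below are illustrative choices.

```python
# Sketch of a truncation-based near-collision search (illustrative only).
import hashlib
from itertools import count

def H(x: int, n: int) -> int:
    """Toy n-bit hash: the leading n bits of MD5 of the input's bytes."""
    d = hashlib.md5(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") >> (128 - n)

def near_collision(n: int, t: int, w: int):
    """Find x != y with Hamming distance between H(x) and H(y) at most w,
    by collecting collisions on the top n - t bits (truncation)."""
    table = {}  # truncated value -> (input, full hash value)
    for x in count():
        h = H(x, n)
        trunc = h >> t  # drop the t low bits
        if trunc in table:
            y, hy = table[trunc]
            dist = bin(h ^ hy).count("1")
            if dist <= w:  # candidate pair: check full Hamming distance
                return x, y, dist
        else:
            table[trunc] = (x, h)

x, y, d = near_collision(n=32, t=6, w=4)
print(x, y, d)
```

A truncated collision on n - t bits costs about \(2^{(n-t)/2}\) evaluations, and each candidate pair is a w-near-collision with good probability when t is close to w; this is the trade-off the paper analyzes and improves by combining truncation with covering codes.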

In this paper, we give new insight into the generic complexity of a near-collision attack. First, we consider time-memory trade-offs for truncation-based algorithms. For a practical implementation, it seems reasonable to assume that some memory is available, and we show that taking advantage of this memory can significantly reduce the complexity. Second, we show a new method combining truncation and covering codes. The new algorithm is always at least as good as the previous works, and often gives a significant improvement. We illustrate our results by giving a 10-near-collision for MD5: our algorithm has a complexity of \(2^{45.4}\) using 1 TB of memory, while the best previous algorithm required \(2^{52.5}\) computations.


Keywords: Hash function · Near-collision · Generic attack · Time-memory trade-off

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. UCL Crypto Group, Louvain-la-Neuve, Belgium