Analysis of Neural Cryptography
In this paper we analyse the security of a new key exchange protocol proposed in , which is based on mutually learning neural networks. This is a potential new source of public key cryptographic schemes that are not based on number-theoretic functions and that have small time and memory complexities. In the first part of the paper we analyse the scheme, explain why the two parties converge to a common key, and explain why an attacker using a similar neural network is unlikely to converge to the same key. In the second part of the paper, however, we show that this key exchange protocol can be broken in three different ways, and is thus completely insecure.
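To illustrate the kind of protocol under analysis, the following is a minimal sketch of mutual learning between two tree parity machines with a Hebbian update rule. All parameter choices (K hidden units, N inputs per unit, weight bound L) and the class structure are illustrative assumptions, not the exact construction analysed in the paper.

```python
import random

# Illustrative (hypothetical) parameters: K hidden units, N inputs each,
# integer weights bounded in [-L, L].
K, N, L = 3, 10, 3

def sign(x):
    return 1 if x >= 0 else -1

class TPM:
    """A small tree parity machine (sketch)."""
    def __init__(self, rng):
        # Each party starts from a random, secret initial state.
        self.w = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

    def output(self, x):
        # Hidden-unit outputs are signs of local fields; the public
        # output tau is their product.
        self.sigma = [sign(sum(wi * xi for wi, xi in zip(wk, xk)))
                      for wk, xk in zip(self.w, x)]
        tau = 1
        for s in self.sigma:
            tau *= s
        return tau

    def hebbian(self, x, tau):
        # Hebbian rule: update only the units that agree with tau,
        # clipping weights back into [-L, L].
        for k in range(K):
            if self.sigma[k] == tau:
                for i in range(N):
                    self.w[k][i] = max(-L, min(L, self.w[k][i] + tau * x[k][i]))

rng = random.Random(0)          # fixed seed for a reproducible demo
A, B = TPM(rng), TPM(rng)

steps = 0
while A.w != B.w and steps < 100_000:
    # A fresh public random input is broadcast each round.
    x = [[rng.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    tA, tB = A.output(x), B.output(x)
    if tA == tB:                # both parties update only when outputs agree
        A.hebbian(x, tA)
        B.hebbian(x, tB)
    steps += 1

assert A.w == B.w               # synchronized: the common weights form the key
```

Because only the inputs and the single-bit outputs are exchanged, the synchronized weight vectors serve as the shared secret; the attacks described in the paper show that this secrecy does not actually hold.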
Keywords: Neural Network, Random Input, Chaotic Synchronization, Random Initial State, Hebbian Learning Rule
- 1. M. Biehl, N. Caticha, "Statistical Mechanics of On-Line Learning and Generalization", The Handbook of Brain Theory and Neural Networks, 2001.
- 2. John A. Clark, Jeremy L. Jacob, "Fault Injection and a Timing Channel on an Analysis Technique", Proceedings of Eurocrypt 2002, p. 181.
- 4. M. Opper, W. Kinzel, "Statistical Mechanics of Generalization", Models of Neural Networks III, 151–20, 1995.