Hopfield Networks


Part of the book series: Texts in Computer Science (TCS)

Abstract

In the preceding Chaps. 5 to 7 we studied so-called feedforward networks, that is, networks with an acyclic graph (no directed cycles). In this and the next chapter we turn to so-called recurrent networks, that is, networks whose graph may contain (directed) cycles.
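To make the distinction concrete, here is a minimal Hopfield-style recurrent network sketched in Python. All names, the Hebbian storage rule, and the asynchronous update scheme are illustrative assumptions for this sketch, not code from the book:

    import numpy as np

    # Minimal Hopfield network sketch (illustrative, not the book's code).
    # States are bipolar (+1/-1); the weight matrix is symmetric with zero
    # diagonal, so every connection i -> j is paired with j -> i, forming
    # the directed cycles that make the network recurrent.

    def train_hebbian(patterns):
        # Hebbian storage: W = (1/n) * sum of outer products of the patterns.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)  # no self-connections
        return W

    def recall(W, x, steps=200, seed=0):
        # Asynchronous updates: one randomly chosen neuron at a time.
        rng = np.random.default_rng(seed)
        x = x.copy()
        for _ in range(steps):
            i = rng.integers(len(x))
            x[i] = 1 if W[i] @ x >= 0 else -1
        return x

    patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                         [ 1,  1,  1, -1, -1, -1]])
    W = train_hebbian(patterns)
    noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern, one bit flipped
    print(recall(W, noisy))                 # settles into the stored pattern

Because each update feeds the new state back into the neurons that produced it, the computation is not a single feedforward pass but a dynamical process that settles into a fixed point.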


Notes

  1.

    In linear algebra one usually studies the inverse problem, that is, given a matrix, one tries to find its eigenvalues and eigenvectors.
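The direct problem mentioned in this note, constructing a matrix that has prescribed vectors as eigenvectors, can be illustrated with a small numpy sketch (the concrete values are illustrative assumptions, not from the book):

    import numpy as np

    # Direct problem: build a matrix with a prescribed eigenvector.
    # For the outer product W = x x^T we have W x = x (x^T x) = |x|^2 x,
    # so x is an eigenvector of W with eigenvalue |x|^2.
    x = np.array([1.0, -1.0, 1.0, -1.0])
    W = np.outer(x, x)
    print(W @ x)               # equals 4 * x, i.e. eigenvalue |x|^2 = 4

    # The inverse problem of linear algebra: given W, recover the spectrum.
    vals, vecs = np.linalg.eig(W)
    print(np.sort(vals.real))  # [0, 0, 0, 4]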

References

  • D.H. Ackley, G.E. Hinton, T.J. Sejnowski. A Learning Algorithm for Boltzmann Machines. Cogn. Sci. 9(1):147–169 (Elsevier Science, Amsterdam, 1985)

  • J.A. Anderson, E. Rosenfeld, Neurocomputing: Foundations of Research (MIT Press, Cambridge, 1988)

  • C. Andrieu, N. De Freitas, A. Doucet, M.I. Jordan. An Introduction to MCMC for Machine Learning. Mach. Learn. 50:5–43 (Kluwer, Dordrecht, 2003)

  • Y. Freund, D. Haussler. Unsupervised Learning of Distributions on Binary Vectors Using Two Layer Networks. Advances in Neural Information Processing Systems 4 (Morgan Kaufmann, San Mateo, 1992), pp. 912–919

  • W. Greiner, L. Neise, H. Stöcker. Thermodynamik und Statistische Mechanik (Series: Theoretische Physik) (Verlag Harri Deutsch, Thun/Frankfurt am Main, 1987); English edition: Thermodynamics and Statistical Mechanics (Springer, Berlin, 2000)

  • S. Haykin, Neural Networks and Learning Machines (Prentice Hall, Englewood Cliffs, 2008)

  • D.O. Hebb. The Organization of Behavior (Wiley, New York, 1949). Chap. 4: The First Stage of Perception: Growth of an Assembly, reprinted in [Anderson and Rosenfeld 1988], pp. 45–56

  • G.E. Hinton, T.J. Sejnowski. Learning and Relearning in Boltzmann Machines, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, ed. by D.E. Rumelhart, J.L. McClelland (1986), pp. 282–317

  • G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput. 14(8):1771–1800 (MIT Press, Cambridge, 2002)

  • G.E. Hinton, S. Osindero, Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Comput. 18(7):1527–1554 (MIT Press, Cambridge, 2006)

  • G.E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. Technical Report 2010-003, Department of Computer Science, University of Toronto, Canada (2010)

  • J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. 79:2554–2558 (1982)

  • J.J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Nat. Acad. Sci. 81:3088–3092 (1984)

  • J.J. Hopfield, D.W. Tank. "Neural" computation of decisions in optimization problems. Biol. Cybern. 52:141–152 (Springer, Heidelberg, 1985)

  • E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik 31:253–258 (1925)

  • S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi. Optimization by simulated annealing. Science 220:671–680 (1983)

  • S. Kullback, R.A. Leibler. On information and sufficiency. Ann. Math. Stat. 22:79–86 (Institute of Mathematical Statistics, Hayward, 1951)

  • N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller. Equation of state calculations by fast computing machines. J. Chem. Phys. 21:1087–1092 (American Institute of Physics, Melville, 1953)

  • R. Rojas, Theorie der neuronalen Netze – Eine systematische Einführung (Springer, Berlin, 1996)

  • D.E. Rumelhart, J.L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations (MIT Press, Cambridge, 1986)

  • P. Smolensky. Information Processing in Dynamical Systems: Foundations of Harmony Theory, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, ed. by D.E. Rumelhart, J.L. McClelland (1986), pp. 194–281

  • T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient, in Proceedings of the 25th International Conference on Machine Learning (ICML 2008, Helsinki, Finland) (ACM Press, New York, 2008)


Author information

Correspondence to Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim or Matthias Steinbrecher.


Copyright information

© 2016 Springer-Verlag London

About this chapter

Cite this chapter

Kruse, R., Borgelt, C., Braune, C., Mostaghim, S., Steinbrecher, M. (2016). Hopfield Networks. In: Computational Intelligence. Texts in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-7296-3_8

  • DOI: https://doi.org/10.1007/978-1-4471-7296-3_8

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-7294-9

  • Online ISBN: 978-1-4471-7296-3

  • eBook Packages: Computer Science, Computer Science (R0)
