Hopfield Networks, Simulated Annealing, and Chaotic Neural Networks

Chapter in Neural Networks and Statistical Learning

Abstract

The Hopfield model is the most popular dynamic neural network model. Simulated annealing, inspired by annealing in metallurgy, is a metaheuristic for approximating the global optimum in a large search space, and the annealing concept is widely used in training recurrent neural networks. Chaotic neural networks are recurrent neural networks endowed with chaotic dynamics. The cellular neural network generalizes the Hopfield network to a two- or higher-dimensional array of cells. This chapter is dedicated to these models, which are widely used for solving combinatorial optimization problems.
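To make the abstract's ingredients concrete, the following is a minimal Python sketch (not the authors' code) of a bipolar Hopfield network with Hebbian outer-product storage and asynchronous recall, together with a Metropolis-style annealed update under a geometric cooling schedule. All names (train_hopfield, recall, anneal, T0, alpha) are illustrative assumptions, not an API from the book.

    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product storage for bipolar (+/-1) patterns,
        given as a (num_patterns, n) array."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)   # no self-connections
        return W / n

    def energy(W, s):
        """Hopfield energy E(s) = -1/2 s^T W s."""
        return -0.5 * s @ W @ s

    def recall(W, s, steps=100, rng=None):
        """Asynchronous deterministic updates: each flip to sign(h_i)
        never increases the energy, so the state settles in a minimum."""
        rng = np.random.default_rng() if rng is None else rng
        s = s.copy()
        for _ in range(steps):
            i = rng.integers(len(s))
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    def anneal(W, s, T0=2.0, alpha=0.95, sweeps=50, rng=None):
        """Simulated-annealing variant: accept uphill flips with
        Metropolis probability exp(-dE/T), cooling T geometrically."""
        rng = np.random.default_rng() if rng is None else rng
        s, T = s.copy(), T0
        for _ in range(sweeps):
            for i in rng.permutation(len(s)):
                dE = 2.0 * s[i] * (W[i] @ s)   # energy change if s[i] flips
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i] = -s[i]
            T *= alpha   # geometric cooling schedule
        return s

For instance, storing a few random bipolar patterns with train_hopfield and calling recall on a corrupted copy typically converges to the nearest stored pattern, while anneal trades monotone energy descent for a chance to escape spurious minima, which is the motivation behind annealed and chaotic search in combinatorial optimization.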



Author information

Correspondence to Ke-Lin Du.


Copyright information

© 2019 Springer-Verlag London Ltd., part of Springer Nature

About this chapter


Cite this chapter

Du, KL., Swamy, M.N.S. (2019). Hopfield Networks, Simulated Annealing, and Chaotic Neural Networks. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_7

