
Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork


Abstract

This article presents a simulation study validating an adaptation methodology for learning the weights of a Hopfield neural network configured as a static optimizer. The quadratic Lyapunov function associated with the Hopfield network dynamics is leveraged to map the constraints of a static optimization problem onto the network, which yields a set of constraint-specific penalty or weighting coefficients whose values must be specified. Rather than fixing these values by hand, the methodology learns the constraint weighting coefficients through adaptation and uses them in turn to compute the network weights, effectively eliminating the guesswork in defining weight values for a given static optimization problem, a long-standing challenge in artificial neural networks. The simulation study is performed on the Traveling Salesman Problem from the domain of combinatorial optimization. Simulation results indicate that, starting from random values for the weights and the constraint weighting coefficients, the adaptation procedure guides the Hopfield network toward solutions of the problem. At the conclusion of the adaptation phase, the Hopfield network has acquired weight values that readily position it to search for local minimum solutions. The demonstrated success of the adaptation procedure eliminates the need to guess or predetermine weight values for the Hopfield network.
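To make the mapping concrete, consider the classic Hopfield–Tank formulation of the Traveling Salesman Problem, the benchmark used in the simulation study. With binary neurons $v_{xi}$ indicating that city $x$ occupies tour position $i$ of an $n$-city tour, each constraint contributes a quadratic penalty term to the network's Lyapunov (energy) function, scaled by its own coefficient:

$$
E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j \neq i} v_{xi} v_{xj}
  + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y \neq x} v_{xi} v_{yi}
  + \frac{C}{2}\Big(\sum_{x}\sum_{i} v_{xi} - n\Big)^{2}
  + \frac{D}{2}\sum_{x}\sum_{y \neq x}\sum_{i} d_{xy}\, v_{xi}\,(v_{y,i+1} + v_{y,i-1}).
$$

Matching this expression term by term against the generic Hopfield energy $E = -\tfrac{1}{2}\sum_{a,b} w_{ab} v_a v_b - \sum_{a} b_a v_a$ expresses every weight $w_{ab}$ and bias $b_a$ as a function of the coefficients $A, B, C, D$; choosing those few coefficient values is precisely the guesswork the adaptation procedure is designed to remove.

The Python sketch below illustrates the overall idea. The coefficient-to-weight mapping is the standard Hopfield–Tank one; the adaptation rule, which simply strengthens the coefficient of each constraint class still violated when the network settles, is a hypothetical stand-in, since the abstract does not state the paper's exact learning law, and the names (tsp_weights, relax, adapt) are illustrative.

    import numpy as np

    def tsp_weights(d, A, B, C, D):
        """Map TSP penalty coefficients to Hopfield weights and biases.
        Neuron (x, i) = 1 means city x occupies tour position i."""
        n = d.shape[0]
        W = np.zeros((n, n, n, n))  # index order: (x, i, y, j)
        for x in range(n):
            for i in range(n):
                for y in range(n):
                    for j in range(n):
                        W[x, i, y, j] = (
                            -A * (x == y) * (i != j)        # one tour position per city
                            - B * (i == j) * (x != y)       # one city per tour position
                            - C                             # exactly n active neurons overall
                            - D * d[x, y] * (j in ((i + 1) % n, (i - 1) % n))  # tour length
                        )
        # The bias C*n comes from expanding the quadratic global-count penalty.
        return W.reshape(n * n, n * n), np.full(n * n, C * n)

    def relax(W, b, v, rng, iters=2000):
        """Asynchronous binary updates; a fixed budget stands in for convergence."""
        for _ in range(iters):
            a = rng.integers(len(v))
            v[a] = 1.0 if W[a] @ v - W[a, a] * v[a] + b[a] > 0 else 0.0
        return v

    def adapt(d, epochs=50, eta=0.1, seed=0):
        """Hypothetical adaptation loop: raise the penalty coefficient of every
        constraint class the settled network still violates (D, which scales
        the objective rather than a constraint, is left untouched here)."""
        rng = np.random.default_rng(seed)
        n = d.shape[0]
        A, B, C, D = rng.uniform(0.5, 1.5, size=4)      # random starting coefficients
        for _ in range(epochs):
            W, b = tsp_weights(d, A, B, C, D)
            v = relax(W, b, rng.integers(0, 2, n * n).astype(float), rng)
            V = v.reshape(n, n)
            A += eta * np.sum(V.sum(axis=1) != 1)       # cities without exactly one position
            B += eta * np.sum(V.sum(axis=0) != 1)       # positions without exactly one city
            C += eta * abs(V.sum() - n)                 # global unit count off target
        return A, B, C, D

    # Example with 5 random cities in the unit square (assumed test data):
    pts = np.random.default_rng(1).random((5, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    print(adapt(d))

Starting from random coefficients, repeated relax-and-adjust cycles of this kind push the coefficients toward values whose induced weights let the feasibility constraints dominate, which is the qualitative behavior the simulation study reports.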



Author information


Correspondence to Gursel Serpen.



Cite this article

Serpen, G. Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork. Neural Process Lett 27, 1–15 (2008). https://doi.org/10.1007/s11063-007-9055-8

