
Convergence of higher-order two-state neural networks with modified updating


Abstract

The Hopfield network is a standard tool for maximizing a quadratic objective function over the discrete set {−1, 1}^n. It is well known that if a Hopfield network is operated in an asynchronous mode, then the state vector of the network converges to a local maximum of the objective function; if the network is operated in a synchronous mode, then the state vector either converges to a local maximum or else enters a limit cycle of length two. In this paper, we examine the behaviour of higher-order neural networks, that is, networks used for maximizing objective functions that are not necessarily quadratic. It is shown that one can assume, without loss of generality, that the objective function to be maximized is multilinear. Three methods are given for updating the state vector of the neural network, called the asynchronous, the best neighbour and the gradient rules, respectively. For Hopfield networks with a quadratic objective function, the asynchronous rule proposed here reduces to standard asynchronous updating, while the gradient rule reduces to synchronous updating; the best neighbour rule does not appear to have been considered previously. It is shown that both the asynchronous rule and the best neighbour rule converge to a local maximum of the objective function within a finite number of time steps. Moreover, under certain conditions, under the best neighbour rule each global maximum has a nonzero radius of direct attraction; in general, this may not be true of the asynchronous rule. The behaviour of the gradient updating rule, however, is not well understood. For this reason, a modified gradient updating rule is presented that incorporates both temporal and spatial correlations among the neurons. For the modified updating rule, it is shown that, after a finite number of time steps, the network state vector enters a limit cycle of length m, where m is the degree of the objective function. If m = 2, i.e., for quadratic objective functions, the modified updating rule reduces to the synchronous updating rule for Hopfield networks. Hence the results presented here are “true” generalizations of previously known results.
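To make the three updating rules concrete, here is a minimal sketch; it is not the paper's implementation, and the function names and the cubic example objective are invented for illustration. It works for a generic multilinear objective f over {−1, 1}^n and exploits the fact that a multilinear f is affine in each coordinate, so each neuron's local field can be recovered from two evaluations of f. (Multilinearity is no loss of generality: since x_i^2 = 1 on {−1, 1}, any higher power of a variable in a polynomial objective can be absorbed.)

```python
import numpy as np

def local_field(f, x, i):
    # A multilinear f is affine in x[i]: f(x) = a + b*x[i], where the
    # "local field" b does not depend on x[i]. Recover b from two
    # evaluations of f.
    xp, xm = x.copy(), x.copy()
    xp[i], xm[i] = 1, -1
    return (f(xp) - f(xm)) / 2.0

def asynchronous_step(f, x, i):
    # Asynchronous rule: reset neuron i alone to whichever value in
    # {-1, +1} gives the larger objective (a tie keeps the old value).
    b = local_field(f, x, i)
    if b != 0:
        x[i] = 1 if b > 0 else -1
    return x

def best_neighbour_step(f, x):
    # Best neighbour rule: among the n states reachable by flipping a
    # single neuron, move to the best one, if it strictly improves f.
    best_i, best_val = None, f(x)
    for i in range(len(x)):
        y = x.copy()
        y[i] = -y[i]
        if f(y) > best_val:
            best_i, best_val = i, f(y)
    if best_i is not None:
        x[best_i] = -x[best_i]
    return x

def gradient_step(f, x):
    # Gradient rule: every neuron simultaneously takes the sign of its
    # local field (tie broken towards +1 here); for a quadratic
    # f(x) = x'Wx this is the synchronous Hopfield update x <- sgn(Wx).
    b = np.array([local_field(f, x, i) for i in range(len(x))])
    return np.where(b >= 0, 1, -1)

# Illustrative cubic (degree m = 3) multilinear objective on {-1,+1}^3.
f = lambda x: 2 * x[0] * x[1] * x[2] + x[0] * x[1] - x[2]
x = np.array([1, -1, 1])
print(asynchronous_step(f, x.copy(), 0))  # one neuron updated
print(best_neighbour_step(f, x.copy()))   # best single improving flip
print(gradient_step(f, x.copy()))         # all neurons update at once
```

Iterating asynchronous_step or best_neighbour_step until no flip improves f must terminate at a local maximum in finitely many steps, since f strictly increases along the trajectory and the state space is finite; gradient_step, by contrast, can oscillate, which is the behaviour the modified rule in the paper is designed to control.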


References

  • Hopfield J J 1982 Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79: 2554–2558


  • Goles E, Fogelman-Soulié F, Pellegrin D 1985 Decreasing energy functions as a tool for studying threshold networks. Discrete Appl. Math. 12: 261–277


  • Bruck J, Goodman J W 1988 A generalized convergence theorem for neural networks. IEEE Trans. Inf. Theory 34: 1089–1092


  • Kamp Y, Hasler M 1990 Recursive neural networks for associative memory (Chichester: John Wiley)


  • Hopfield J J, Tank D W 1985 ‘Neural’ computation of decisions in optimization problems. Biol. Cybern. 52: 141–152


  • Masti C L, Vidyasagar M 1991 A stochastic high-order connectionist network for solving inferencing problems. Proc. Int. Joint Conf. Neural Networks, Singapore, pp. 911–916

  • Bruck J, Blum M 1989 Neural networks, error-correcting codes, and polynomials over the binary n-cube. IEEE Trans. Inf. Theory 35: 976–987


  • van Laarhoven P, Aarts E 1987 Simulated annealing: Theory and applications (Dordrecht: Reidel)


  • Vidyasagar M 1985 Control system synthesis: A factorization approach (Cambridge, MA: MIT Press)



Author information

Correspondence to M Vidyasagar.


About this article

Cite this article

Vidyasagar, M. Convergence of higher-order two-state neural networks with modified updating. Sadhana 19, 239–255 (1994). https://doi.org/10.1007/BF02811897
