Abstract
The Hopfield network is a standard tool for maximizing a quadratic objective function over the discrete set {−1, 1}^n. It is well known that if a Hopfield network is operated in asynchronous mode, then the state vector of the network converges to a local maximum of the objective function; if the network is operated in synchronous mode, then the state vector either converges to a local maximum or else enters a limit cycle of length two. In this paper, we examine the behaviour of higher-order neural networks, that is, networks used for maximizing objective functions that are not necessarily quadratic. It is shown that one can assume, without loss of generality, that the objective function to be maximized is multilinear. Three methods are given for updating the state vector of the neural network, called the asynchronous, the best neighbour and the gradient rules, respectively. For Hopfield networks with a quadratic objective function, the asynchronous rule proposed here reduces to standard asynchronous updating, while the gradient rule reduces to synchronous updating; the best neighbour rule does not appear to have been considered previously. It is shown that both the asynchronous rule and the best neighbour rule converge to a local maximum of the objective function within a finite number of time steps. Moreover, under certain conditions, under the best neighbour rule each global maximum has a nonzero radius of direct attraction; in general, this may not be true of the asynchronous rule. The behaviour of the gradient updating rule, however, is not well understood. For this purpose, a modified gradient updating rule is presented that incorporates both temporal and spatial correlations among the neurons. For the modified updating rule, it is shown that, after a finite number of time steps, the network state vector enters a limit cycle of length m, where m is the degree of the objective function.
If m = 2, i.e., for quadratic objective functions, the modified updating rule reduces to the synchronous updating rule for Hopfield networks. Hence the results presented here are “true” generalizations of previously known results.
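To make the asynchronous and best neighbour rules concrete, here is a minimal sketch in Python. It is illustrative only, not the paper's formal construction: the function names, the "flip the first improving component" tie-breaking in the asynchronous rule, and the sample cubic multilinear objective are all assumptions introduced for the example.

```python
def async_update(f, x):
    """Asynchronous-style step: flip the first single component whose
    flip strictly increases f. Returns (new state, whether a flip occurred)."""
    for i in range(len(x)):
        y = list(x)
        y[i] = -y[i]
        if f(y) > f(x):
            return y, True
    return list(x), False

def best_neighbour_update(f, x):
    """Best neighbour step: among all states at Hamming distance one,
    move to the one giving the largest strict increase in f."""
    best, best_val = None, f(x)
    for i in range(len(x)):
        y = list(x)
        y[i] = -y[i]
        if f(y) > best_val:
            best, best_val = y, f(y)
    return (best, True) if best is not None else (list(x), False)

def local_maximize(f, x, rule):
    """Iterate a rule until no single flip improves f; the convergence
    results stated in the abstract guarantee termination for these rules."""
    changed = True
    while changed:
        x, changed = rule(f, x)
    return x

# Example: a degree-3 multilinear objective on {-1, 1}^3 (hypothetical).
f = lambda x: 2*x[0]*x[1] + x[1]*x[2] - x[0]*x[1]*x[2]
x_star = local_maximize(f, [-1, -1, -1], best_neighbour_update)
```

Since every accepted move strictly increases the objective, and the state space is finite, both rules must reach a state with no improving single-flip neighbour, i.e., a local maximum, in finitely many steps.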
Vidyasagar, M. Convergence of higher-order two-state neural networks with modified updating. Sadhana 19, 239–255 (1994). https://doi.org/10.1007/BF02811897