
Neural Processing Letters, Volume 1, Issue 2, pp 14–17

Equivalence between some dynamical systems for optimization

  • Kiichi Urahama
Article

Abstract

It is shown, through the derivation of solution methods for an elementary optimization problem, that stochastic relaxation in image analysis, Potts neural networks for combinatorial optimization, and interior point methods for nonlinear programming share a common formulation of their dynamics. This unification of the algorithms opens the possibility of solving these problems in real time with common analog electronic circuits.
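The abstract does not reproduce the common dynamical form itself, but in the Potts mean-field literature it is typically a normalized-exponential (softmax) update on the probability simplex. The sketch below is a minimal illustration of that form on a toy linear cost, with an annealing schedule; the function names, the cost vector, and the schedule are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def mean_field_update(x, cost_grad, beta):
    # Softmax (Potts mean-field) update on the simplex:
    # x_i <- exp(-beta * dE/dx_i) / sum_j exp(-beta * dE/dx_j)
    u = np.exp(-beta * cost_grad(x))
    return u / u.sum()

# Toy problem (hypothetical): pick the cheapest of three labels.
c = np.array([2.0, 0.5, 1.5])
grad = lambda x: c            # linear cost E(x) = c @ x, so the gradient is c
x = np.ones(3) / 3            # start at the simplex barycenter
for beta in np.linspace(1.0, 20.0, 50):   # assumed annealing schedule
    for _ in range(10):
        x = mean_field_update(x, grad, beta)
print(x.argmax())  # -> 1, the minimum-cost label
```

As beta grows, the update concentrates the state on the cheapest label while every iterate stays a valid probability vector, which is the sense in which relaxation labeling, Potts networks, and interior point methods can share one simplex-constrained dynamics.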

Keywords

Neural Network, Dynamical System, Image Analysis, Artificial Intelligence, Complex System



Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • Kiichi Urahama, Department of Computer Science and Electronics, Kyushu Institute of Technology, Iizuka-shi, Japan
