Dynamical System Approaches to Combinatorial Optimization

  • Jens Starke
  • Michael Schanz
Chapter

Abstract

This article describes and compares several dynamical system approaches to combinatorial optimization problems. These include penalty methods, the approach of Hopfield and Tank, self-organizing maps (Kohonen networks), coupled selection equations, and hybrid methods. Many of them are investigated analytically, and the costs of the resulting solutions are compared numerically with those of solutions obtained by simulated annealing and with the cost of a globally optimal solution. To make the simulation results reproducible, the data sets are produced by a pseudo-random number generator based on integer arithmetic.
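The reproducibility point can be illustrated with a minimal sketch of a pseudo-random number generator that uses only integer arithmetic, so identical seeds give identical data sets on any machine. The constants below are the classic Park–Miller "minimal standard" linear congruential generator; the article's actual generator is not specified here, so this is an assumption for illustration only.

```python
# Sketch of a reproducible PRNG with pure integer arithmetic.
# Constants are the Park-Miller "minimal standard" LCG (an assumption;
# the article does not specify its generator).

class IntegerLCG:
    """Linear congruential generator: x_{n+1} = (A * x_n) mod M."""
    A = 16807          # multiplier
    M = 2**31 - 1      # modulus (a Mersenne prime)

    def __init__(self, seed=1):
        assert 0 < seed < self.M
        self.state = seed

    def next_int(self):
        # Pure integer update: no floating point, hence bit-exact
        # reproducibility across platforms.
        self.state = (self.A * self.state) % self.M
        return self.state

gen = IntegerLCG(seed=42)
sample = [gen.next_int() for _ in range(3)]
```

Because the update involves only integer multiplication and a modulus, the same seed regenerates exactly the same test instances, which is what makes the numerical cost comparisons repeatable.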

Using dynamical systems, a solution to the combinatorial optimization problem emerges in the limit of large times as an asymptotically stable point of the dynamics. These are often not globally optimal solutions, but good approximations thereof. Dynamical system and neural network approaches are well suited to distributed and parallel processing, and owing to this parallelism they can complete a given task much faster than algorithms running on a traditional sequential digital computer.
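The idea that a discrete decision emerges as an asymptotically stable point can be seen in a minimal winner-take-all building block, in the spirit of (but not identical to) the coupled selection equations discussed in the article. Each variable x_k obeys dx_k/dt = x_k (1 − x_k − B Σ_{j≠k} x_j) with B > 1; the only stable states are "pure" decisions with one variable at 1 and the rest at 0, and the variable with the largest initial value wins. The choice B = 2 and simple Euler integration are assumptions of this sketch.

```python
# Winner-take-all selection dynamics (illustrative sketch, not the
# article's exact equations):
#     dx_k/dt = x_k * (1 - x_k - B * sum_{j != k} x_j),   B > 1.
# Stable fixed points are unit vectors; the largest initial value wins,
# so encoding option quality in the initial condition turns the flow
# into a (local) optimizer.

def winner_take_all(x0, B=2.0, dt=0.01, steps=20000):
    x = list(x0)
    for _ in range(steps):
        total = sum(x)
        # Explicit Euler step of the coupled ODEs.
        x = [xk + dt * xk * (1.0 - xk - B * (total - xk)) for xk in x]
    return x

x = winner_take_all([0.3, 0.2, 0.1])   # option 0 has the head start
```

In the limit of large times the state converges to (1, 0, 0): the decision is read off from which component survives, exactly the "solution as asymptotically stable point" picture described above.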

The analysis focuses on the linear two-dimensional (two-index) assignment problem and the NP-hard three-dimensional (three-index) assignment problem. These and other assignment problems can serve as models for many industrial problems such as manufacturing planning and the optimization of flexible manufacturing systems (FMS).
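For concreteness, the linear two-index assignment problem asks for a permutation p minimizing Σ_i c[i][p(i)] over an n × n cost matrix c. For small n the global optimum can be obtained by brute force over all n! permutations, which is how one can check the cost of a globally optimal solution against the dynamical approximations (the cost matrix below is a made-up example, not from the article).

```python
from itertools import permutations

# Linear two-index assignment problem: given an n x n cost matrix,
# find a permutation p minimizing sum_i cost[i][p(i)].
# Brute force over all n! permutations: exact, but only feasible
# for small n (used as a ground-truth reference).

def assignment_brute_force(cost):
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, value = assignment_brute_force(cost)   # -> (1, 0, 2) with cost 5
```

The NP-hard three-index variant replaces the permutation by a choice of triples (i, j, k) with each index used exactly once, which is why exact enumeration breaks down much sooner there.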

Keywords

Simulated Annealing; Assignment Problem; Traveling Salesman Problem; Combinatorial Optimization Problem



Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  • Jens Starke (1)
  • Michael Schanz (2)
  1. Institute of Applied Mathematics, University of Heidelberg, Germany
  2. Institute of Parallel and Distributed High-Performance Systems, University of Stuttgart, Germany
