A Discrete Adaptive Stochastic Neural Model for Constrained Optimization

  • Giuliano Grossi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)

Abstract

The ability to map combinatorial optimization problems with constraints onto neural networks, and to solve them there, has frequently motivated proposals to use this model of computation.

We introduce a new stochastic neural model, designed for a specific class of constraints, that adaptively chooses its weights in order to find solutions within a proper subspace (the feasible region) of the search space.

We show its asymptotic convergence properties and give evidence of its ability to find high-quality solutions on benchmark and randomly generated instances of a specific problem.
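The abstract does not spell out the model's update rule, so the following is only a rough Python sketch of the general idea, assuming Glauber-style single-neuron flips with an annealed temperature and a geometrically growing constraint-penalty weight, applied to maximum independent set as a representative constrained problem. All function names, parameters, and the penalty schedule are hypothetical, not the paper's actual dynamics.

```python
import math
import random

# Hypothetical sketch: a discrete stochastic network of binary neurons,
# updated one at a time, whose constraint-penalty weight grows adaptively
# so that the state is driven into the feasible region. Illustrated on
# maximum independent set: maximize sum(x) subject to x[i] * x[j] = 0
# for every edge (i, j).

def energy(x, edges, penalty):
    """Penalized energy: reward selected vertices, penalize violated edges."""
    violations = sum(1 for i, j in edges if x[i] and x[j])
    return -sum(x) + penalty * violations

def stochastic_solve(n, edges, steps=20000, temp=2.0, cooling=0.9995,
                     penalty=1.0, growth=1.0005, seed=0):
    rng = random.Random(seed)
    x = [0] * n                      # all-zero start state (trivially feasible)
    best, best_val = x[:], 0
    for _ in range(steps):
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1                    # propose flipping one binary neuron
        delta = energy(y, edges, penalty) - energy(x, edges, penalty)
        # Accept with a Boltzmann probability: stochastic, temperature-driven.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = y
        temp *= cooling              # anneal the temperature toward zero
        penalty *= growth            # adaptively raise the constraint weight
        # Track the best feasible solution seen so far.
        if sum(x) > best_val and not any(x[i] and x[j] for i, j in edges):
            best, best_val = x[:], sum(x)
    return best

if __name__ == "__main__":
    # Demo on a small random graph with edge probability 0.2.
    n, rng = 30, random.Random(1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < 0.2]
    print("independent set size:", sum(stochastic_solve(n, edges)))
```

In this sketch the growing penalty plays the role of the adaptively chosen weights: early on the network explores the whole search space freely, while later the penalty makes constraint violations prohibitively costly, confining the dynamics to the feasible region.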

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Giuliano Grossi
  1. Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano, Milano, Italy
