A Discrete Adaptive Stochastic Neural Model for Constrained Optimization
The possibility of mapping constrained combinatorial optimization problems onto neural networks has repeatedly motivated proposals to use this model of computation for solving them.
We introduce a new stochastic neural model, tailored to a specific class of constraints, that adaptively chooses its weights so as to confine its search to a proper subspace (the feasible region) of the search space.
We show its asymptotic convergence properties and give evidence of its ability to find high-quality solutions on benchmark and randomly generated instances of a specific problem.
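The abstract's idea of stochastic neuron updates combined with adaptive weights that steer the dynamics toward the feasible region can be sketched generically. The following is an illustrative Python sketch only, not the paper's model: it assumes a Metropolis-style acceptance rule over binary neuron states and a multiplicative penalty-growth heuristic, both of which are our own stand-ins for the adaptive weight-selection rule the paper actually defines.

```python
import math
import random

def stochastic_search(n, objective, violation, steps=20000, temp=1.0, seed=0):
    """Stochastic binary-neuron search with an adaptive constraint penalty.

    Illustrative sketch: Metropolis acceptance and multiplicative penalty
    growth stand in for the paper's adaptive weight-selection mechanism.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]  # random initial 0/1 state
    penalty = 1.0
    best, best_cost = None, float("inf")
    for _ in range(steps):
        i = rng.randrange(n)  # pick one neuron to (tentatively) flip
        y = x[:]
        y[i] ^= 1
        # Energy difference of the penalized objective after the flip
        delta = (objective(y) - objective(x)) + penalty * (violation(y) - violation(x))
        # Metropolis rule: accept downhill moves, uphill with prob e^(-delta/T)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = y
        if violation(x) > 0:
            penalty *= 1.001  # raise the penalty while infeasible
        elif objective(x) < best_cost:
            best, best_cost = x[:], objective(x)  # record best feasible state
    return best, best_cost

# Toy instance: maximum independent set on a 4-cycle
# (the optimum selects 2 non-adjacent vertices)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cost_fn = lambda s: -sum(s)                             # maximize selected vertices
viol_fn = lambda s: sum(s[u] * s[v] for u, v in edges)  # count conflicting edges
solution, cost = stochastic_search(4, cost_fn, viol_fn)
```

Because the best state is only recorded when the violation count is zero, the returned solution is always feasible; the growing penalty pushes the chain back into the feasible region whenever it drifts out.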