Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization
Training neural networks is a complex task of great importance in supervised learning. Particle Swarm Optimization (PSO) is a stochastic global search method that originated from attempts to graphically simulate the social behavior of a flock of birds searching for resources. In this work we analyze the use of the PSO algorithm and two variants with a local search operator for neural network training, and we investigate the influence of the GL5 stopping criterion on generalization control in swarm optimizers. To evaluate these algorithms we apply them to benchmark classification problems from the medical field. The results show that the hybrid GCPSO with a local search operator achieved the best results among the particle swarm optimizers on two of the three tested problems.
Keywords: Particle Swarm Optimization · Particle Swarm · Particle Swarm Optimization Algorithm · Stop Criterion · Standard Particle Swarm Optimization
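To make the optimizer concrete, the following is a minimal sketch of training a small feed-forward network with a standard inertia-weight PSO on a toy problem. It is an illustration only, not the paper's GCPSO variant or experimental setup; the network size, dataset, and the parameter values (inertia 0.729, acceleration coefficients 1.494) are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical illustration: standard inertia-weight PSO training a tiny
# 2-4-1 feed-forward network on XOR. Not the authors' GCPSO variant.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1  # all weights and biases, flattened

def mse(w):
    """Decode a flat particle into network weights and return the MSE."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return np.mean((out - y) ** 2)

# Assumed PSO parameters (commonly used values, not from the paper).
N_PART, ITERS = 30, 400
w_inertia, c1, c2 = 0.729, 1.494, 1.494

pos = rng.uniform(-1, 1, (N_PART, DIM))
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
g = pbest_err.argmin()
gbest, gbest_err = pbest[g].copy(), pbest_err[g]

for _ in range(ITERS):
    r1 = rng.random((N_PART, DIM))
    r2 = rng.random((N_PART, DIM))
    # Velocity update: inertia + cognitive pull + social pull.
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved] = pos[improved]
    pbest_err[improved] = err[improved]
    g = pbest_err.argmin()
    if pbest_err[g] < gbest_err:
        gbest, gbest_err = pbest[g].copy(), pbest_err[g]

print(round(float(gbest_err), 4))
```

Each particle is one candidate weight vector for the whole network, so the swarm searches weight space directly and needs no gradient information; this is what lets PSO be hybridized with a gradient-based local search operator, as the paper's variants do.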
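The GL5 stopping criterion mentioned above comes from Prechelt's Proben1 benchmark rules: the generalization loss GL(t) = 100 · (E_va(t) / E_opt(t) − 1) measures how far the current validation error E_va(t) has risen above the lowest validation error E_opt(t) seen so far, and training stops when GL exceeds 5%. A minimal sketch (function names are my own):

```python
def generalization_loss(val_errors):
    """GL(t) = 100 * (E_va(t) / E_opt(t) - 1): percent increase of the
    current validation error over the best validation error so far."""
    e_opt = min(val_errors)
    return 100.0 * (val_errors[-1] / e_opt - 1.0)

def gl5_stop(val_errors, threshold=5.0):
    """GL_5 criterion: stop when the generalization loss exceeds 5%."""
    return generalization_loss(val_errors) > threshold

# Validation error improves, then rises 10% above its minimum -> stop.
history = [0.30, 0.25, 0.20, 0.22]
print(gl5_stop(history))  # True
```

Monitoring GL during swarm optimization gives the generalization control the paper studies: the run halts once the global best starts overfitting the training set at the expense of validation performance.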