Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization

  • M. Carvalho
  • T. B. Ludermir
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4233)


Training neural networks is a complex task of great importance in supervised learning. Particle Swarm Optimization (PSO) is a stochastic global search method that originated in attempts to graphically simulate the social behavior of a flock of birds searching for food. In this work we analyze the use of the PSO algorithm, and of two variants augmented with a local search operator, for neural network training, and we investigate the influence of the GL5 stopping criterion on generalization control for swarm optimizers. We evaluate these algorithms on benchmark classification problems from the medical domain. The results show that the hybrid GCPSO with the local search operator achieved the best results among the particle swarm optimizers on two of the three tested problems.
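To make the idea concrete, the following is a minimal sketch of standard (gbest) PSO training the weights of a small feed-forward network. It is not the authors' exact setup: the dataset, network topology, swarm size, and coefficients are illustrative assumptions, and the GL5 criterion (which monitors validation error) and the local search operator are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset (XOR-like); purely illustrative, not the paper's medical benchmarks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Feed-forward net: 2 inputs -> H hidden units (tanh) -> 1 output (sigmoid).
H = 4
DIM = 2 * H + H + H + 1  # input weights + hidden biases + output weights + output bias

def unpack(w):
    """Map a flat particle position vector onto the network's weight matrices."""
    i = 0
    W1 = w[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H].reshape(H, 1); i += H
    b2 = w[i]
    return W1, b1, W2, b2

def mse(w):
    """Mean squared error of the network encoded by w; this is the fitness PSO minimizes."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-((h @ W2).ravel() + b2)))
    return np.mean((out - y) ** 2)

# Standard PSO with inertia weight (Kennedy & Eberhart style; constants are common choices).
N_PART, ITERS = 20, 200
W_INERTIA, C1, C2 = 0.729, 1.49445, 1.49445

pos = rng.uniform(-1, 1, (N_PART, DIM))   # each particle is one full weight vector
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
g = np.argmin(pbest_err)
gbest, gbest_err = pbest[g].copy(), pbest_err[g]
initial_err = gbest_err

for _ in range(ITERS):
    r1 = rng.random((N_PART, DIM))
    r2 = rng.random((N_PART, DIM))
    # Velocity update: inertia + cognitive (pbest) + social (gbest) components.
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(N_PART):
        e = mse(pos[i])
        if e < pbest_err[i]:
            pbest[i], pbest_err[i] = pos[i].copy(), e
            if e < gbest_err:
                gbest, gbest_err = pos[i].copy(), e

print("initial best error:", initial_err, "final best error:", gbest_err)
```

By construction the global-best error is monotonically non-increasing, which is what makes PSO usable as a drop-in replacement for gradient-based weight training; the hybrid variants studied in the paper additionally refine the best particle with a local search step.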


Keywords: Particle Swarm Optimization · Particle Swarm · Particle Swarm Optimization Algorithm · Stop Criterion · Standard Particle Swarm Optimization




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • M. Carvalho
  • T. B. Ludermir

Center of Informatics, Federal University of Pernambuco, Recife, Brazil
