A Multi-population Cooperative Particle Swarm Optimizer for Neural Network Training

  • Ben Niu
  • Yun-Long Zhu
  • Xiao-Xian He
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


This paper presents a new learning algorithm, the Multi-Population Cooperative Particle Swarm Optimizer (MCPSO), for neural network training. MCPSO is based on a master-slave model, in which the population consists of one master group and several slave groups. The slave groups each execute a single PSO or one of its variants independently, maintaining the diversity of particles, while the master group evolves based on both its own information and the information of the slave groups. Particles in the master group and the slave groups co-evolve during the search process through a parameter termed the migration factor. MCPSO is applied to the training of a multilayer feed-forward neural network on three benchmark classification problems. The performance of MCPSO for neural network training is compared with that of Back Propagation (BP), a genetic algorithm (GA), and standard PSO (SPSO), demonstrating its effectiveness and efficiency.
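The master-group update described in the abstract can be sketched as a velocity rule that blends the master swarm's own best with the best solution found by the slave swarms, weighted by the migration factor. The coefficient names (`w`, `c1`, `c2`, `c3`), the `(1 - phi)`/`phi` blending form, and the default values below are illustrative assumptions, not the paper's exact equations:

```python
# Illustrative sketch of a master-particle velocity update in an MCPSO-style
# scheme: slave swarms run standard PSO independently; the master swarm is
# additionally attracted to the slaves' best position, with the migration
# factor phi shifting influence between the master's own global best
# (weight 1 - phi) and the slaves' best (weight phi). All parameter values
# here are assumed, not taken from the paper.
import random

def master_velocity(v, x, pbest, gbest_master, gbest_slaves,
                    w=0.729, c1=1.49, c2=1.49, c3=1.49, phi=0.5):
    """One-dimensional master-particle velocity update (illustrative).

    v             -- current velocity of the master particle
    x             -- current position of the master particle
    pbest         -- the particle's own best position so far
    gbest_master  -- best position found within the master group
    gbest_slaves  -- best position found by any slave group
    phi           -- migration factor in [0, 1]
    """
    r1, r2, r3 = (random.random() for _ in range(3))
    return (w * v
            + c1 * r1 * (pbest - x)
            + (1 - phi) * c2 * r2 * (gbest_master - x)
            + phi * c3 * r3 * (gbest_slaves - x))
```

With `phi = 0` the rule collapses to an ordinary global-best PSO update for the master group; with `phi = 1` the master is steered entirely by the slaves' discoveries.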


Keywords: Particle Swarm Optimization, Back Propagation, Back Propagation Neural Network, Neural Network Training, Standard Particle Swarm Optimization





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ben Niu (1, 2)
  • Yun-Long Zhu (1)
  • Xiao-Xian He (1, 2)
  1. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
  2. Graduate School of the Chinese Academy of Sciences, Beijing, China
