A Gaussian Particle Swarm Optimization for Training a Feed Forward Neural Network

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 293)

Abstract

This paper proposes a Gaussian-PSO algorithm that provides optimized parameters for a feed-forward neural network. Feed-forward neural networks are widely used in many applications owing to advantages such as their learning capability, self-organization, and self-adaptation. However, they converge slowly and are easily trapped in local minima. In this paper, Gaussian-distributed random variables are used in the PSO algorithm to enhance its performance and to train the weights and biases of the neural network. Compared with the back-propagation neural network, the Gaussian PSO-trained network converges faster and is less susceptible to local minima.
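Below is a minimal, illustrative sketch of the idea, not the paper's own implementation: it assumes the Gaussian variant replaces the uniform U(0,1) coefficients in the standard PSO velocity update with absolute values of N(0,1) draws, and it encodes all network weights and biases in each particle's position vector. The network size (2-3-1), the XOR toy data, and all function names and hyperparameters are illustrative choices.

```python
import numpy as np

# Toy training set (XOR) for a hypothetical 2-3-1 feed-forward network.
# Each particle's position vector holds all weights and biases; its
# fitness is the mean squared error on the training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(theta):
    W1, b1 = theta[:6].reshape(2, 3), theta[6:9]    # input -> hidden
    W2, b2 = theta[9:12].reshape(3, 1), theta[12]   # hidden -> output
    h = np.tanh(X @ W1 + b1)
    out = np.tanh(h @ W2 + b2).ravel()
    return np.mean((out - y) ** 2)

def gaussian_pso(loss, dim, n_particles=30, iters=500, w=0.7, seed=0):
    """Gaussian PSO sketch: the usual U(0,1) coefficients in the velocity
    update are replaced with |N(0,1)| draws (an assumed reading of the
    Gaussian variant)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pcost = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()

    for _ in range(iters):
        r1 = np.abs(rng.standard_normal((n_particles, dim)))
        r2 = np.abs(rng.standard_normal((n_particles, dim)))
        v = w * v + r1 * (pbest - x) + r2 * (gbest - x)
        v = np.clip(v, -1.0, 1.0)                   # simple velocity clamp
        x = x + v
        cost = np.array([loss(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

weights, err = gaussian_pso(mse, dim=13)  # 2*3 + 3 + 3*1 + 1 = 13 parameters
print(f"training MSE: {err:.4f}")
```

Because the swarm evaluates only the loss and never its gradient, training in this style is what allows escape from the local minima that can trap back-propagation.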



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

Graduate School of Information, Production and Systems, Waseda University, Fukuoka, Japan
