Artificial Bee Colony (ABC) Optimization Algorithm for Training Feed-Forward Neural Networks

  • Dervis Karaboga
  • Bahriye Akay
  • Celal Ozturk
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4617)

Abstract

Training an artificial neural network is an optimization task, since the goal of the training process is to find an optimal set of weights for the network. Traditional training algorithms have drawbacks such as getting stuck in local minima and high computational complexity. Evolutionary algorithms are therefore employed to train neural networks and overcome these issues. In this work, the Artificial Bee Colony (ABC) algorithm, which has good exploration and exploitation capabilities in searching for an optimal weight set, is used to train neural networks.
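The scheme the abstract describes — the ABC algorithm searching the weight space of a feed-forward network, with food sources as candidate weight vectors and employed, onlooker, and scout bee phases — might be sketched roughly as follows. This is a minimal illustration on a toy XOR task, not the paper's actual benchmarks; the network size, colony parameters (`sn`, `limit`, `cycles`), and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward net: 2 inputs -> 2 hidden (tanh) -> 1 linear output.
# A candidate solution encodes [W1 (2x2), b1 (2), W2 (2x1), b2 (1)] = 9 weights.
DIM = 9

def forward(w, X):
    W1, b1 = w[0:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8].reshape(2, 1), w[8]
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Toy XOR training set (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def fitness(w):
    return 1.0 / (1.0 + mse(w))  # common ABC fitness for a minimisation objective

def abc_train(sn=20, limit=50, cycles=300, bound=2.0):
    foods = rng.uniform(-bound, bound, size=(sn, DIM))  # food sources = weight sets
    trials = np.zeros(sn, dtype=int)                    # stagnation counters

    def try_neighbor(i):
        # Candidate v differs in one random dimension j:
        # v_ij = x_ij + phi * (x_ij - x_kj), with k a random other source
        k = rng.choice([s for s in range(sn) if s != i])
        j = rng.integers(DIM)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        if fitness(v) > fitness(foods[i]):  # greedy selection
            foods[i] = v
            trials[i] = 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(sn):                 # employed bee phase: one trial per source
            try_neighbor(i)
        fits = np.array([fitness(f) for f in foods])
        probs = fits / fits.sum()
        for _ in range(sn):                 # onlooker phase: fitness-proportional choice
            try_neighbor(rng.choice(sn, p=probs))
        worst = int(np.argmax(trials))      # scout phase: abandon exhausted sources
        if trials[worst] > limit:
            foods[worst] = rng.uniform(-bound, bound, size=DIM)
            trials[worst] = 0

    best = min(foods, key=mse)
    return best, mse(best)

w, err = abc_train()
print("final MSE:", err)
```

Note that, unlike backpropagation, the search uses only objective-function evaluations (no gradients), which is what lets ABC sidestep local minima at the cost of more function evaluations.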


References

  1. Rumelhart, D.E., Williams, R.J., Hinton, G.E.: Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition 1, 318–362 (1986)
  2. Liu, Z., Liu, A., Wang, C., Niu, Z.: Evolving neural network using real coded genetic algorithm (GA) for multispectral image classification. Future Generation Computer Systems 20, 1119–1129 (2004)
  3. Craven, M.P.: A faster learning neural network classifier using selective backpropagation. In: Proceedings of the Fourth IEEE International Conference on Electronics, Circuits and Systems, Cairo, Egypt, December 15–18, 1997, vol. 1, pp. 254–258 (1997)
  4. Whitley, D., Starkweather, T., Bogart, C.: Genetic algorithms and neural networks: optimizing connections and connectivity. Technical Report CS 89-117, Department of Computer Science, Colorado State University (1989)
  5. Rudnick, M.: A bibliography of the intersection of genetic search and artificial neural networks. Technical Report CS/E 90-001, Oregon Graduate Center, Beaverton, OR (1990)
  6. Weiß, G.: Combining neural and evolutionary learning: aspects and approaches. Technical Report FKI-132-90, Institut für Informatik, Technische Universität München (1990)
  7. Weiß, G.: Towards the synthesis of neural and evolutionary learning. In: Omidvar, O. (ed.) Progress in Neural Networks, ch. 5, vol. 5. Ablex, Norwood, NJ (1993)
  8. Albrecht, R.F., Reeves, C.R., Steele, N.C.: Artificial Neural Nets and Genetic Algorithms: Proceedings of the International Conference, Innsbruck, Austria. Springer, Heidelberg (1993)
  9. Jones, A.J.: Genetic algorithms and their applications to the design of neural networks. Neural Computing & Applications 1, 32–45 (1993)
  10. Ling, S.H., Lam, H.K., Leung, F.H.F., Lee, Y.S.: A genetic algorithm based variable structure neural network. In: IECON 2003, the 29th Annual Conference of the IEEE Industrial Electronics Society, vol. 1, pp. 436–441 (2003)
  11. Marshall, S.J., Harrison, R.F.: Optimization and training of feedforward neural networks by genetic algorithms. In: Second International Conference on Artificial Neural Networks, November 18–20, 1991, pp. 39–43 (1991)
  12. Verma, B., Ghosh, R.: A novel evolutionary neural learning algorithm. In: CEC 2002, Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, pp. 1884–1889 (2002)
  13. Gao, Q., Lei, K.Q.Y., He, Z.: An improved genetic algorithm and its application in artificial neural networks. In: Fifth International Conference on Information, Communications and Signal Processing, December 6–9, 2005, pp. 357–360 (2005)
  14. Yao, X.: Evolutionary artificial neural networks. International Journal of Neural Systems 4(3), 203–222 (1993)
  15. Eberhart, R.C., Dobbins, R.W.: Designing neural network explanation facilities using genetic algorithms. In: IEEE International Joint Conference on Neural Networks, November 18–21, 1991, vol. 2, pp. 1758–1763 (1991)
  16. Jelodar, M.S., Fakhraie, S.M., Ahmadabadi, M.N.: A new approach for training of artificial neural networks using population based incremental learning (PBIL). In: International Conference on Computational Intelligence, pp. 165–168 (2004)
  17. Tsai, J.T., Chou, J.H., Liu, T.K.: Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm. IEEE Transactions on Neural Networks 17(1) (2006)
  18. El-Gallad, A.I., El-Hawary, M., Sallam, A.A., Kalas, A.: Swarm-intelligently trained neural network for power transformer protection. In: Canadian Conference on Electrical and Computer Engineering, vol. 1, pp. 265–269 (2001)
  19. Mendes, R., Cortez, P., Rocha, M., Neves, J.: Particle swarm for feedforward neural network training. In: Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 1895–1899 (2002)
  20. Van den Bergh, F., Engelbrecht, A.: Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal 26, 84–90 (2000)
  21. Ismail, A., Engelbrecht, A.: Global optimization algorithms for training product unit neural networks. In: IJCNN 2000, International Joint Conference on Neural Networks, vol. 1, pp. 132–137. IEEE Computer Society, Los Alamitos, CA (2000)
  22. Kennedy, J., Eberhart, R.: Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco (2001)
  23. Meissner, M., Schmuker, M., Schneider, G.: Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinformatics 7, 125 (2006)
  24. Fan, H., Lampinen, J.: A trigonometric mutation operation to differential evolution. Journal of Global Optimization 27, 105–129 (2003)
  25. Ilonen, J., Kamarainen, J.I., Lampinen, J.: Differential evolution training algorithm for feed-forward neural networks
  26. Pavlidis, N.G., Tasoulis, D.K., Plagianakos, V.P., Nikiforidis, G., Vrahatis, M.N.: Spiking neural network training using evolutionary algorithms. In: IJCNN 2005, Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, vol. 4, pp. 2190–2194 (2005)
  27. Plagianakos, V.P., Vrahatis, M.N.: Training neural networks with threshold activation functions and constrained integer weights. In: IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, vol. 5, pp. 161–166 (2000)
  28. Plagianakos, V.P., Vrahatis, M.N.: Neural network training with constrained integer weights. In: CEC 1999, Proceedings of the 1999 Congress on Evolutionary Computation, vol. 3, p. 2013 (1999)
  29. Yu, B., He, X.: Training radial basis function networks with differential evolution. In: IEEE International Conference on Granular Computing, May 10–12, 2006, pp. 369–372 (2006)
  30. Yao, X., Liu, Y.: A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks 8(3), 694–713 (1997)
  31. Gao, W.: New evolutionary neural networks. In: First International Conference on Neural Interface and Control Proceedings, May 26–28, 2005, Wuhan, China (2005)
  32. Davoian, K., Lippe, W.: A new self-adaptive EP approach for ANN weights training. In: Enformatika Transactions on Engineering, Computing and Technology, Barcelona, Spain, October 22–24, 2006, vol. 15, pp. 109–114 (2006)
  33. Leung, C., Chow, W.S.: A hybrid global learning algorithm based on global search and least squares techniques for backpropagation networks. In: International Conference on Neural Networks, vol. 3, pp. 1890–1895 (1997)
  34. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
  35. Basturk, B., Karaboga, D.: An Artificial Bee Colony (ABC) algorithm for numeric function optimization. In: IEEE Swarm Intelligence Symposium, May 12–14, 2006, Indianapolis, Indiana, USA (2006)
  36. Weiß, G.: Neural networks and evolutionary computation. Part I: hybrid approaches in artificial intelligence. In: International Conference on Evolutionary Computation, pp. 268–272 (1994)
  37. Fahlman, S.: An empirical study of learning speed in back-propagation networks. Technical Report CMU-CS-88-162, Carnegie Mellon University, Pittsburgh, PA 15213 (September 1988)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Dervis Karaboga (1)
  • Bahriye Akay (1)
  • Celal Ozturk (1)
  1. Erciyes University, Engineering Faculty, Department of Computer Engineering
