Improvement of the Backpropagation Algorithm Using (1+1) Evolutionary Strategies

  • José Parra Galaviz
  • Patricia Melin
  • Leonardo Trujillo
Part of the Studies in Computational Intelligence book series (SCI, volume 312)


Currently, the standard in supervised Artificial Neural Network (ANN) research is to train with the backpropagation (BP) algorithm or one of its improved variants. In this chapter, we present an improvement to the widely used BP learning algorithm based on the (1+1) Evolution Strategy (ES), one of the most widely used artificial-evolution paradigms. The goal is to provide a method that can adaptively change the main learning parameters of the BP algorithm in an unconstrained manner. The BP/ES algorithm we propose is simple to implement and can be used in combination with various improved versions of BP. Our experimental tests show a substantial improvement in ANN performance, in some cases a reduction of more than 50% in error for time-series prediction on a standard benchmark. We therefore believe that our proposal effectively combines the learning abilities of BP with the global search of ES, providing a useful tool that improves the quality of learning for BP-based methods.
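The chapter's core idea can be illustrated with a minimal sketch: a (1+1) ES keeps a single parent learning rate, produces one Gaussian-mutated child per generation, and retains whichever rate yields the lower training error after a BP step. The one-weight toy model, the mutation strength of 0.5, and all names below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

random.seed(0)

# Hypothetical toy problem: fit y = 0.5*x with a single linear weight.
# A real BP/ES setup would adapt eta (and momentum) while training a full ANN.
data = [(x / 10.0, 0.05 * x) for x in range(-10, 11)]

def mse(w):
    """Mean squared training error of the one-weight model."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def bp_epoch(w, eta):
    """One gradient-descent epoch (the BP step for this toy model)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - eta * grad

# (1+1) ES over the learning rate: a lone parent eta competes with one
# mutated child each generation; the better survivor is kept.
eta, w = 0.01, 2.0
for _ in range(100):
    child_eta = eta * math.exp(0.5 * random.gauss(0.0, 1.0))  # log-normal Gaussian mutation
    w_parent = bp_epoch(w, eta)
    w_child = bp_epoch(w, child_eta)
    if mse(w_child) <= mse(w_parent):  # survivor selection on training error
        eta, w = child_eta, w_child
    else:
        w = w_parent
```

The log-normal mutation keeps the learning rate positive while allowing unconstrained multiplicative growth or shrinkage, which matches the "unconstrained" parameter adaptation described above; the same loop structure extends to a momentum coefficient.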


Keywords: Learning Rate · Connection Weight · Momentum Coefficient · Gaussian Mutation · Adaptive Learning Rate





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • José Parra Galaviz¹
  • Patricia Melin¹
  • Leonardo Trujillo¹
  1. Instituto Tecnológico de Tijuana, Tijuana, México
