
An optimizing BP neural network algorithm based on genetic algorithm

Published in: Artificial Intelligence Review

Abstract

A back-propagation (BP) neural network has good self-learning, self-adapting, and generalization ability, but it is easily trapped in local minima and converges slowly. We therefore propose a method that optimizes the BP algorithm with a genetic algorithm (GA), speeding up BP training and reducing its tendency to get stuck in local minima. Experiments on UCI data sets show that, compared with the plain BP algorithm and with a method that uses only a GA to learn the connection weights, the combined GA-BP training works better: it is less likely to become trapped in a local minimum, the trained network generalizes better, and the method is more stable.
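The two-stage idea in the abstract — a GA globally searches for good network weights, then BP gradient descent fine-tunes from the best GA individual — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the network size, GA operators (truncation selection, uniform crossover, Gaussian mutation), hyperparameters, and the XOR toy data standing in for the UCI sets are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR (a stand-in for the UCI data sets used in the paper)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

N_IN, N_HID, N_OUT = 2, 4, 1
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases

def unpack(w):
    """Split a flat chromosome into the network's weight matrices."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    """Fitness: mean squared error of the network encoded by w."""
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - y) ** 2)

# --- Stage 1: GA searches the weight space for a good starting point ---
POP, GENS, SIGMA = 30, 40, 0.3
pop = rng.normal(0, 1, (POP, N_W))
for _ in range(GENS):
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]        # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        pa, pb = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(N_W) < 0.5, pa, pb)   # uniform crossover
        child = child + rng.normal(0, SIGMA, N_W) * (rng.random(N_W) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(ind) for ind in pop])]
loss_ga = mse(best)

# --- Stage 2: BP (gradient descent) fine-tunes from the GA solution ---
w = best.copy()
LR = 0.5
for _ in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagate MSE through the two sigmoid layers
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    w -= LR * np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])

loss_bp = mse(w)
print(f"GA loss: {loss_ga:.4f}  after BP fine-tuning: {loss_bp:.4f}")
```

The division of labor mirrors the paper's motivation: the GA is good at escaping poor basins of attraction but slow to converge precisely, while BP converges quickly but only locally, so seeding BP with the GA's best individual combines the strengths of both.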



Author information

Corresponding author

Correspondence to Shifei Ding.


About this article

Cite this article

Ding, S., Su, C. & Yu, J. An optimizing BP neural network algorithm based on genetic algorithm. Artif Intell Rev 36, 153–162 (2011). https://doi.org/10.1007/s10462-011-9208-z

