
Self-Adaptive Evolutionary Extreme Learning Machine

Published in Neural Processing Letters

Abstract

In this paper, we propose an improved learning algorithm, the self-adaptive evolutionary extreme learning machine (SaE-ELM), for single-hidden-layer feedforward networks (SLFNs). In SaE-ELM, the network hidden node parameters are optimized by the self-adaptive differential evolution (DE) algorithm, in which the trial vector generation strategies and their associated control parameters are self-adapted within a strategy pool by learning from their previous success in generating promising solutions, while the network output weights are computed analytically using the Moore–Penrose generalized inverse. Because it self-adaptively determines the suitable generation strategies and control parameters of DE, SaE-ELM generally outperforms the evolutionary extreme learning machine (E-ELM) and the differential evolution trained Levenberg–Marquardt method. Simulations show that SaE-ELM not only performs better than E-ELM with several manually chosen generation strategies and control parameters, but also achieves better generalization performance than several related methods.
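The two ingredients described in the abstract, hidden node parameters evolved by self-adaptive differential evolution and output weights solved in closed form via the Moore–Penrose pseudoinverse, can be illustrated with a minimal sketch. This is not the paper's full SaE-ELM: the strategy pool is reduced to two mutation strategies (DE/rand/1 and DE/current-to-best/1) whose selection probability adapts from success counts, control parameters F and CR are simply resampled per trial rather than learned from a success history, and the toy sinc regression task, network sizes, and population settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): noisy sinc function.
X = rng.uniform(-5, 5, (200, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=200)
Xtr, ytr = X[:150], y[:150]          # training split (fits output weights)
Xva, yva = X[150:], y[150:]          # validation split (DE fitness)

L = 10                 # hidden nodes
D = Xtr.shape[1]
dim = L * (D + 1)      # each DE candidate encodes hidden weights and biases

def rmse(theta, Xf, yf, Xe, ye):
    """Fit output weights by pseudoinverse on (Xf, yf); score RMSE on (Xe, ye)."""
    a = theta[:L * D].reshape(L, D)                 # hidden input weights
    b = theta[L * D:]                               # hidden biases
    H = 1.0 / (1.0 + np.exp(-(Xf @ a.T + b)))      # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ yf                   # Moore-Penrose solution
    He = 1.0 / (1.0 + np.exp(-(Xe @ a.T + b)))
    return np.sqrt(np.mean((He @ beta - ye) ** 2))

NP, GENS = 20, 30
pop = rng.uniform(-1, 1, (NP, dim))
fit = np.array([rmse(p, Xtr, ytr, Xva, yva) for p in pop])
init_best = fit.min()

p_rand1 = 0.5          # probability of choosing strategy "DE/rand/1"
succ = np.ones(2)      # per-strategy success counters (Laplace-smoothed)

for g in range(GENS):
    best = pop[fit.argmin()].copy()
    for i in range(NP):
        F, CR = rng.uniform(0.4, 0.9), rng.uniform(0.1, 0.9)
        r = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        use_rand1 = rng.random() < p_rand1
        if use_rand1:  # DE/rand/1 mutation
            v = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
        else:          # DE/current-to-best/1 mutation
            v = pop[i] + F * (best - pop[i]) + F * (pop[r[0]] - pop[r[1]])
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True              # binomial crossover
        trial = np.where(mask, v, pop[i])
        f_trial = rmse(trial, Xtr, ytr, Xva, yva)
        if f_trial <= fit[i]:                       # greedy selection
            pop[i], fit[i] = trial, f_trial
            succ[0 if use_rand1 else 1] += 1        # credit winning strategy
    p_rand1 = succ[0] / succ.sum()                  # self-adapt strategy choice

print(f"validation RMSE: {init_best:.4f} -> {fit.min():.4f}")
```

Note the division of labor: DE only searches the nonlinear hidden-node parameters, while for every candidate the linear output weights are obtained in one pseudoinverse solve, so no gradient training of the network is ever needed.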




Correspondence to Jiuwen Cao.


Cite this article

Cao, J., Lin, Z. & Huang, GB. Self-Adaptive Evolutionary Extreme Learning Machine. Neural Process Lett 36, 285–305 (2012). https://doi.org/10.1007/s11063-012-9236-y
