
Dynamic ensemble extreme learning machine based on sample entropy

Published in Soft Computing

Abstract

The extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feed-forward neural networks. By randomly selecting the input weights and hidden-layer biases and then solving for the output weights analytically, ELM avoids several drawbacks of traditional gradient-based learning algorithms, such as convergence to local minima, sensitivity to the learning rate, and slow training. However, ELM suffers from instability and over-fitting, especially on large datasets. This paper proposes a dynamic ensemble extreme learning machine based on sample entropy, which alleviates these problems to some extent and increases prediction accuracy. Experimental results show that the proposed approach is robust and efficient.
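To make the training scheme the abstract describes concrete, the following is a minimal sketch of a basic ELM in NumPy, plus a plain averaging ensemble of several ELMs. This is an illustration of the general technique only: the function names are invented for this sketch, and the paper's actual contribution (dynamic ensemble selection driven by sample entropy) is more elaborate than the simple average shown here.

```python
import numpy as np

def elm_train(X, y, n_hidden=30, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Input weights and hidden biases are drawn randomly and never updated.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    # Output weights via the Moore-Penrose pseudoinverse: beta = H^+ y
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, pi].
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)

# Generic ensembling for variance reduction: train several ELMs with
# different random initializations and average their predictions.
preds = [elm_predict(X, *elm_train(X, y, seed=s)) for s in range(5)]
y_ens = np.mean(preds, axis=0)
print(float(np.mean((y - y_ens) ** 2)))  # ensemble training MSE (small)
```

Because the input weights are fixed, training reduces to one linear least-squares solve, which is the source of ELM's speed; the randomness in that fixed layer is also the source of the instability that ensembling is meant to dampen.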




Acknowledgments

This research is supported by the National Natural Science Foundation of China (61170040), the Natural Science Foundation of Hebei Province (F2010000323, F2011201063), the Key Scientific Research Foundation of the Education Department of Hebei Province (ZD2010139), and the Natural Science Foundation of Hebei University (2011-228).

Author information


Corresponding author

Correspondence to Jun-hai Zhai.


About this article

Cite this article

Zhai, Jh., Xu, Hy. & Wang, Xz. Dynamic ensemble extreme learning machine based on sample entropy. Soft Comput 16, 1493–1502 (2012). https://doi.org/10.1007/s00500-012-0824-6
