Abstract
The extreme learning machine (ELM) is a learning algorithm proposed for single-hidden-layer feed-forward neural networks. By randomly selecting the input weights and hidden-layer biases, ELM overcomes several drawbacks of traditional gradient-based learning algorithms, such as local minima, improper learning rates, and slow learning speed. However, ELM suffers from instability and over-fitting, especially on large datasets. In this paper, a dynamic ensemble extreme learning machine based on sample entropy is proposed, which alleviates the problems of instability and over-fitting to some extent and increases prediction accuracy. The experimental results show that the proposed approach is robust and efficient.
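To make the base learner concrete: in an ELM, only the output weights are trained; the hidden layer is fixed at random and the output weights are obtained in closed form via the Moore-Penrose pseudoinverse. The following is a minimal sketch of that idea (not the paper's ensemble method), assuming a sigmoid activation and a regression target; the function names and parameter ranges are illustrative choices, not from the paper.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, rng=None):
    """Train a basic ELM: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Input weights and hidden biases are drawn at random and never updated.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights: minimum-norm least-squares solution beta = pinv(H) @ y.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict by passing inputs through the fixed hidden layer."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the random hidden layer varies from run to run, single-ELM predictions can be unstable, which is the motivation for combining several such networks in an ensemble as the paper proposes.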
Acknowledgments
This research is supported by the National Natural Science Foundation of China (61170040), the Natural Science Foundation of Hebei Province (F2010000323, F2011201063), the Key Scientific Research Foundation of the Education Department of Hebei Province (ZD2010139), and the Natural Science Foundation of Hebei University (2011-228).
Zhai, Jh., Xu, Hy. & Wang, Xz. Dynamic ensemble extreme learning machine based on sample entropy. Soft Comput 16, 1493–1502 (2012). https://doi.org/10.1007/s00500-012-0824-6