Proceedings of ELM-2015, Volume 1, pp. 501–509
An Enhanced Extreme Learning Machine for Efficient Small Sample Classification
Abstract
ELM, as an efficient classification technique, has been applied in many popular application domains. However, ELM shows weak generalization performance when the original dataset is small relative to its feature space. In this paper, to address this problem, an enhanced ELM classification framework is proposed to improve the accuracy of the ELM classifier. First, the method automatically obtains k discretization intervals for the continuous data and removes irrelevant and redundant features by mutual information. Next, we select only those features that have high relevance to the target node using an improved Markov boundary identification algorithm. Finally, the enhanced ELM classifier is obtained through an efficient weighted voting mechanism. Experiments conducted on real-life small-sample datasets demonstrate that the proposed framework outperforms previous methods, especially on small-sample data.
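To make the pipeline above concrete, the following is a minimal sketch of two of its stages: mutual-information feature ranking over equal-width discretized features, and a basic single-hidden-layer ELM whose output weights are solved via the Moore–Penrose pseudoinverse. This is an illustrative reconstruction, not the authors' code; all function and class names, the equal-width binning choice, the tanh activation, and the hyperparameters are assumptions for the sketch.

```python
import numpy as np

def _mutual_info(a, b):
    """Empirical mutual information between two discrete label arrays."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))   # joint probability
            if p_ab > 0:
                p_a = np.mean(a == va)
                p_b = np.mean(b == vb)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def mutual_info_select(X, y, k_bins=5, top_k=10):
    """Discretize each continuous feature into k_bins equal-width intervals,
    score it by mutual information with the class label, keep the top_k."""
    d = X.shape[1]
    scores = np.zeros(d)
    for j in range(d):
        # interior edges of an equal-width histogram define the k intervals
        edges = np.histogram_bin_edges(X[:, j], bins=k_bins)[1:-1]
        scores[j] = _mutual_info(np.digitize(X[:, j], edges), y)
    return np.argsort(scores)[::-1][:top_k]

class ELM:
    """Basic ELM: random hidden weights stay fixed; output weights are the
    least-squares solution H^+ T (pseudoinverse of the hidden-layer output)."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                 # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)         # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ T        # analytic output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```

The paper's full framework additionally filters the ranked features through a Markov boundary step and combines several such classifiers by weighted voting; those stages are omitted here for brevity.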
Keywords
Extreme Learning Machine · Representative features · Small sample data
Acknowledgments
Project supported by the National Natural Science Foundation of China (Nos. 61272182, 61100028, 61572117, 61173030, 61173029), the State Key Program of the National Natural Science Foundation of China (Nos. 61332014, U1401256), the New Century Excellent Talents program (NCET-11-0085), and the Fundamental Research Funds for the Central Universities (N130504001).