Abstract
Ensembles of classifiers constitute one of the main current directions in machine learning and data mining. Ensemble methods are commonly divided into static and dynamic ones. Dynamic ensemble methods apply different classifiers to different samples and may therefore achieve better generalization than static ensemble methods. However, most dynamic approaches based on the KNN rule set aside an additional part of the training samples to estimate the "local classification performance" of each base classifier. When training samples are scarce, this reduces the accuracy of the trained models and makes the local performance estimates unreliable, which in turn hurts the overall ensemble performance. This paper presents a new dynamic ensemble model that introduces cross-validation into the evaluation of local performance and then dynamically assigns a weight to each component classifier. Experimental results on 10 UCI data sets demonstrate that when the training set is small, the proposed method achieves better performance than several dynamic ensemble methods as well as some classical static ensemble approaches.
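The scheme described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of k-NN base learners over random feature subsets, the fold count, and the local neighborhood size are all assumptions made for the sketch. The key point it demonstrates is that each training sample's correctness under each base classifier is estimated by cross-validation, so no separate validation set has to be held out, and at prediction time each classifier is weighted by its cross-validated accuracy on the test point's nearest training neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_predict(Xtr, ytr, X, k=3):
    """Plain k-NN majority vote; stands in for an arbitrary base classifier."""
    out = np.empty(len(X), dtype=ytr.dtype)
    for i, x in enumerate(X):
        idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        vals, cnt = np.unique(ytr[idx], return_counts=True)
        out[i] = vals[np.argmax(cnt)]
    return out

def cv_correctness(X, y, feature_sets, n_folds=5):
    """Cross-validated correctness matrix: correct[i, j] is 1.0 if base
    classifier j labelled training sample i correctly while i was held out."""
    n = len(X)
    correct = np.zeros((n, len(feature_sets)))
    for fold in np.array_split(rng.permutation(n), n_folds):
        mask = np.ones(n, dtype=bool)
        mask[fold] = False
        for j, fs in enumerate(feature_sets):
            pred = nn_predict(X[mask][:, fs], y[mask], X[fold][:, fs])
            correct[fold, j] = (pred == y[fold]).astype(float)
    return correct

def predict_dynamic(X, y, feature_sets, correct, x, k_local=7):
    """Weight each base classifier by its CV accuracy on the k_local training
    samples nearest to x, then combine the base predictions by weighted vote."""
    neigh = np.argsort(np.linalg.norm(X - x, axis=1))[:k_local]
    weights = correct[neigh].mean(axis=0)  # local accuracy of each classifier
    votes = {}
    for j, fs in enumerate(feature_sets):
        label = nn_predict(X[:, fs], y, x[fs][None, :])[0]
        votes[label] = votes.get(label, 0.0) + weights[j]
    return max(votes, key=votes.get)

# Toy demo: two Gaussian blobs, base learners on random 2-feature subsets.
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(3, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
feature_sets = [rng.choice(4, size=2, replace=False) for _ in range(5)]
correct = cv_correctness(X, y, feature_sets)
print(predict_dynamic(X, y, feature_sets, correct, np.array([3.0, 3.0, 3.0, 3.0])))
```

A weight of zero simply silences a classifier in the neighborhood where it is locally inaccurate, which is how the dynamic scheme can outperform a fixed (static) combination rule when base classifiers are only locally competent.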
Additional information
Sponsored by the Qing Lan Project of Jiangsu Province, the Innovation Fund for Small Technology-based Firms of China (No. 09C26213203797), the National Natural Science Foundation of China (No. 70971067), the High-tech Research and Development Program of Jiangsu Province (No. BG2007028), and the Natural Science Foundation of Jiangsu Province (No. 08KJA520001).
Cite this article
Yu-Quan, Z., Ji-Shun, O., Geng, C. et al. Dynamic weighting ensemble classifiers based on cross-validation. Neural Comput & Applic 20, 309–317 (2011). https://doi.org/10.1007/s00521-010-0372-x