Feature Weighted Ensemble Classifiers – A Modified Decision Scheme
A number of methods exist for determining the output of an aggregated classifier. A common approach is to apply a majority-voting scheme. If the performance of the classifiers can be ranked in some intelligent way, the voting process can be modified by assigning individual weights to each of the ensemble members. For some base classifiers, such as decision trees, a given node or leaf is activated if the input lies within a well-defined region of input space; in other words, each leaf node can be considered as defining a given feature in input space. In this paper, we present a method for adjusting the voting process of an ensemble by assigning individual weights to this set of features, implying that different nodes of the same decision tree can contribute differently to the overall voting process. By using a randomised "look-up technique" for the training examples, the weights used in the decision process are determined with a perceptron-like learning rule. We present results obtained by applying this technique to bagged ensembles of C4.5 trees and to the so-called PERT classifier, which is an ensemble of highly randomised decision trees. The proposed technique is compared to the majority-voting scheme on a number of data sets.
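The abstract above is the only technical description available on this page, so the following is a minimal sketch of the scheme it outlines, not the authors' implementation. It assumes binary labels in {-1, +1}, bagged scikit-learn decision trees, and a plain perceptron update applied in randomised order over the training examples as a stand-in for the randomised look-up technique; the class name `FeatureWeightedEnsemble` and the parameters `eta` and `n_epochs` are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class FeatureWeightedEnsemble:
    """Bagged decision trees with one learned weight per leaf node.
    A sketch of the scheme described in the abstract; labels must be -1/+1."""

    def __init__(self, n_trees=25, eta=0.1, n_epochs=10, random_state=0):
        self.n_trees, self.eta, self.n_epochs = n_trees, eta, n_epochs
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        n = len(X)
        self.trees, self.weights = [], []
        for _ in range(self.n_trees):
            idx = self.rng.integers(0, n, size=n)        # bootstrap resample
            tree = DecisionTreeClassifier().fit(X[idx], y[idx])
            self.trees.append(tree)
            # one weight per tree node; only leaf entries are ever addressed,
            # and weight 1 everywhere reproduces plain unweighted leaf voting
            self.weights.append(np.ones(tree.tree_.node_count))
        for _ in range(self.n_epochs):
            for i in self.rng.permutation(n):            # randomised order
                x = X[i:i + 1]
                if self._score(x)[0] * y[i] <= 0:        # ensemble error
                    # perceptron-like update on the activated leaves only:
                    # reward leaves that voted with y[i], punish the others
                    for tree, w in zip(self.trees, self.weights):
                        w[tree.apply(x)[0]] += self.eta * y[i] * tree.predict(x)[0]
        return self

    def _score(self, X):
        # weighted sum over trees of each activated leaf's class vote
        return sum(w[tree.apply(X)] * tree.predict(X)
                   for tree, w in zip(self.trees, self.weights))

    def predict(self, X):
        return np.sign(self._score(np.asarray(X)))
```

Because every leaf weight starts at one, the initial decision rule coincides with unweighted voting over the trees' leaf predictions; the perceptron pass then departs from that baseline only where the training examples force it to.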
Keywords: Leaf Node · Input Space · Ensemble Member · Individual Classifier · Ensemble Classifier
References

- 1. Kittler, J. and Roli, F. (eds.): Multiple Classifier Systems, First International Workshop, MCS 2000, Cagliari, Italy, June 2000, Proceedings. Berlin: Springer (2000)
- 5. Dietterich, T.G. and Kong, E.B.: Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms. Technical report, Department of Computer Science, Oregon State University (1995)
- 10. Jørgensen, T.M. and Linneberg, C.: Boosting the performance of weightless neural networks by using a post-processing transformation of the output scores. International Joint Conference on Neural Networks, IEEE, Washington, DC (1999) paper no. 126
- 11. Breiman, L.: Some infinite theory for predictor ensembles. Technical report no. 577, University of California, Berkeley, California (2000)
- 12. Cutler, A. and Zhao, G.: Fast Classification Using Perfect Random Trees. Technical report no. 5/99/99, Department of Mathematics and Statistics, Utah State University (1999)
- 13. Cutler, A. and Zhao, G.: Voting Perfect Random Trees. Technical report no. 5/00/100, Department of Mathematics and Statistics, Utah State University (2000)
- 14. Linneberg, C. and Jørgensen, T.M.: Improved Decision Scheme for Ensembles of Randomised Decision Trees. Submitted for publication (2001)
- 15. Jørgensen, T.M., Christensen, S.S. and Liisberg, C.: Cross-validation and information measures for RAM based neural networks. In: Austin, J. (ed.): RAM-Based Neural Networks. Singapore: World Scientific (1998) 78–88
- 19. Michie, D., Spiegelhalter, D.J. and Taylor, C.C.: Machine Learning, Neural and Statistical Classification. Prentice-Hall (1994). Out of print; available at http://www.amsta.leeds.ac.uk/~charles/statlog/