Abstract
Decision forests are a classification paradigm that combines a collection of decision trees for a classification task instead of relying on a single tree; experiments and applications show improved accuracy and stability. This paper proposes several novel techniques for constructing decision forests based on rough set reduction. Since some data sets admit many reducts, a series of decision trees can be trained with different reducts. Three methods for selecting decision trees or reducts are presented, and the decisions of the selected trees are fused with the plurality voting rule. The experiments show that random selection is the worst of the proposed methods. It is also found that maximizing input diversity does not guarantee maximizing output diversity, and hence cannot guarantee good classification performance in practice. Genetic-algorithm-based selective rough decision forests consistently achieve good classification accuracy compared with a single tree trained on the raw data, as well as with the other two forest-construction methods.
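The fusion step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `plurality_vote`, `forest_predict`, and the stub trees below are hypothetical, and each callable stands in for a decision tree trained on a different reduct (feature subset).

```python
from collections import Counter

def plurality_vote(predictions):
    """Fuse the class labels predicted by several trees for one sample:
    the label with the most votes wins (ties broken by first occurrence)."""
    return Counter(predictions).most_common(1)[0][0]

def forest_predict(trees, sample):
    """Each 'tree' is any callable mapping a sample to a class label.
    In a rough decision forest, each tree would have been trained on a
    different reduct of the decision table."""
    return plurality_vote([tree(sample) for tree in trees])

# Hypothetical stub trees, each consulting a different attribute:
trees = [
    lambda s: 'yes' if s['a'] > 0 else 'no',
    lambda s: 'yes' if s['b'] > 0 else 'no',
    lambda s: 'no',
]
print(forest_predict(trees, {'a': 1, 'b': 1}))  # 'yes' wins the vote 2-1
```

A selective forest, as in the genetic-algorithm variant, would pass only a chosen subset of `trees` to `forest_predict` rather than the whole pool.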
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Hu, QH., Yu, DR., Wang, MY. (2005). Constructing Rough Decision Forests. In: Ślęzak, D., Yao, J., Peters, J.F., Ziarko, W., Hu, X. (eds) Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing. RSFDGrC 2005. Lecture Notes in Computer Science(), vol 3642. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11548706_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28660-8
Online ISBN: 978-3-540-31824-8