Constructing Rough Decision Forests

  • Conference paper
Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3642)

Abstract

Decision forests are a classification paradigm that combines a collection of decision trees for a classification task instead of relying on a single tree; improvements in accuracy and stability have been observed in experiments and applications. This paper proposes several novel techniques for constructing decision forests based on rough set reduction. Since some data sets admit many reducts, a series of decision trees can be trained with different reducts. Three methods for selecting decision trees or reducts are presented, and the decisions of the selected trees are fused with the plurality voting rule. Experiments show that random selection is the worst of the proposed methods. It is also found that maximizing input diversity does not guarantee maximal output diversity, and hence cannot guarantee good classification performance in practice. Genetic-algorithm-based selective rough decision forests consistently achieve good classification accuracy compared with a single tree trained on the raw data, as well as with the other two forest-construction methods.
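The fusion step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each reduct-trained tree has already produced a class label for a sample, and shows only the plurality voting rule that combines them (the function name `plurality_vote` and the toy labels are hypothetical).

```python
from collections import Counter

def plurality_vote(predictions):
    """Fuse the class labels predicted by several trees:
    the label receiving the most votes wins (ties broken
    in favor of the label seen first)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical example: three trees, each trained on a
# different reduct, classify the same sample.
tree_outputs = ["yes", "no", "yes"]
print(plurality_vote(tree_outputs))  # -> yes
```

In the paper's setting, the selection methods (random, input-diversity maximization, genetic algorithm) decide *which* trees contribute votes; the voting rule itself is unchanged.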




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hu, Q.H., Yu, D.R., Wang, M.Y. (2005). Constructing Rough Decision Forests. In: Ślęzak, D., Yao, J., Peters, J.F., Ziarko, W., Hu, X. (eds) Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing. RSFDGrC 2005. Lecture Notes in Computer Science, vol. 3642. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11548706_16

  • DOI: https://doi.org/10.1007/11548706_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28660-8

  • Online ISBN: 978-3-540-31824-8

  • eBook Packages: Computer Science (R0)
