
Analyzing Random Forest Classifier with Different Split Measures

  • Vrushali Y. Kulkarni
  • Manisha Petare
  • P. K. Sinha
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 236)

Abstract

Random forest is an ensemble supervised machine learning technique. The principle of ensemble learning suggests that, to yield better accuracy, the base classifiers in the ensemble should be both diverse and accurate. Random forest uses the decision tree as its base classifier. In this paper, we present a theoretical and empirical comparison of different split measures for decision tree induction in Random forest and test whether the choice of split measure affects the accuracy of Random forest.
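
As a rough illustration of the kind of comparison the paper describes, the sketch below trains Random forests under two standard split measures, the Gini index and information gain (entropy), and compares their cross-validated accuracy. The dataset, hyperparameters, and use of scikit-learn are illustrative assumptions, not the paper's actual experimental setup.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Illustrative dataset; the paper's benchmark datasets may differ.
    X, y = load_breast_cancer(return_X_y=True)

    # "gini" grows trees with the Gini index, G(t) = 1 - sum_i p_i^2;
    # "entropy" uses information gain based on H(t) = -sum_i p_i * log2(p_i).
    for criterion in ("gini", "entropy"):
        forest = RandomForestClassifier(
            n_estimators=100, criterion=criterion, random_state=42
        )
        scores = cross_val_score(forest, X, y, cv=10)
        print(f"{criterion}: mean accuracy {scores.mean():.4f} "
              f"(std {scores.std():.4f})")

In a setup like this, a negligible gap between the two criteria's mean accuracies would indicate that the choice of split measure has little effect on Random forest accuracy, which is the question the paper investigates.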

Keywords

Classification · Split measures · Random forest · Decision tree

Copyright information

© Springer India 2014

Authors and Affiliations

  • Vrushali Y. Kulkarni (1, 2)
  • Manisha Petare (3)
  • P. K. Sinha (4)
  1. COEP, Pune, India
  2. MIT, Pune, India
  3. MIT, Pune, India
  4. HPC and R&D, CDAC, Pune, India
