
Evaluating Decision Trees Grown with Asymmetric Entropies

  • Simon Marcellin
  • Djamel A. Zighed
  • Gilbert Ritschard
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4994)

Abstract

We propose to evaluate the quality of decision trees grown on imbalanced datasets with a splitting criterion based on an asymmetric entropy measure. To deal with the class imbalance problem in machine learning, especially with decision trees, several authors have proposed such asymmetric splitting criteria. Once the tree is grown, a decision rule must be assigned to each leaf. The classical Bayesian rule, which selects the most frequent class, is irrelevant when the dataset is strongly imbalanced, so an assignment rule that takes the asymmetry into account must be adopted. But how can the resulting prediction model then be evaluated? The usual error rate is likewise uninformative when the classes are strongly imbalanced, and appropriate evaluation measures are required in such cases. We consider ROC curves and recall/precision graphs for evaluating the performance of decision trees grown from imbalanced datasets, and we use these criteria to compare trees obtained with an asymmetric splitting criterion against trees grown with a symmetric one. In this paper we only consider two-class problems.
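
The abstract does not reproduce the measure itself, but the authors' earlier IPMU 2006 work defines a two-class asymmetric entropy of the form h_θ(p) = p(1−p) / ((1−2θ)p + θ²), which is maximal at p = θ instead of p = 0.5. The sketch below illustrates that idea together with an asymmetry-aware leaf assignment rule; the function names and the default θ are illustrative assumptions, not taken from the paper.

```python
def asymmetric_entropy(p, theta=0.1):
    """Two-class asymmetric entropy, maximal (= 1) at p = theta.

    p: observed frequency of the positive (minority) class in a node.
    theta: frequency at which uncertainty is considered maximal,
           e.g. the minority-class prior. With theta = 0.5 this reduces
           to 4*p*(1-p), a rescaled symmetric Gini impurity.
    """
    return (p * (1.0 - p)) / ((1.0 - 2.0 * theta) * p + theta ** 2)

def assign_leaf_label(n_pos, n_total, theta=0.1):
    """Asymmetry-aware assignment rule (an illustrative stand-in):
    declare the leaf positive as soon as the positive frequency exceeds
    theta, instead of the Bayesian majority rule (frequency > 0.5),
    which almost never fires when the positive class is rare."""
    return 1 if (n_pos / n_total) > theta else 0

# Example: with theta = 0.1, a leaf holding 12 positives out of 100
# examples is labelled positive, although the majority rule would not.
print(asymmetric_entropy(0.12, theta=0.1))   # close to the peak of 1
print(assign_leaf_label(12, 100, theta=0.1)) # -> 1
```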
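
For the evaluation side, ROC and recall/precision curves require a score to sweep thresholds over; with decision trees a natural choice is the positive-class frequency of the leaf each test example falls into. Below is a minimal sketch using scikit-learn and made-up scores (the paper does not name its tooling or data, so both are assumptions for illustration).

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, precision_recall_curve

# Hypothetical test labels (imbalanced) and leaf-frequency scores.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_score = np.array([0.02, 0.05, 0.08, 0.10, 0.12,
                    0.20, 0.25, 0.30, 0.35, 0.70])

# ROC curve: true positive rate vs. false positive rate over all thresholds.
fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC:", auc(fpr, tpr))

# Recall/precision graph: more informative than the error rate
# when the positive class is rare.
precision, recall, _ = precision_recall_curve(y_true, y_score)
for r, p in zip(recall, precision):
    print(f"recall={r:.2f}  precision={p:.2f}")
```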

Keywords

Decision Tree · Association Rule · Minority Class · Entropy Measure · Splitting Criterion

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Simon Marcellin¹
  • Djamel A. Zighed¹
  • Gilbert Ritschard²

  1. Laboratoire ERIC, Bât L, Université Lumière Lyon 2, Bron, France
  2. Département d'économétrie, Université de Genève, Geneva 4, Switzerland
