
An Improved Model Selection Heuristic for AUC

  • Shaomin Wu
  • Peter Flach
  • Cèsar Ferri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4701)

Abstract

The area under the ROC curve (AUC) has been widely used to measure ranking performance for binary classification tasks. AUC uses the classifier's scores only to rank the test instances; it thus ignores other valuable information conveyed by the scores, such as sensitivity to small differences in the score values. However, as such differences are inevitable across samples, ignoring them may lead to overfitting the validation set when selecting models with high AUC. This paper tackles that problem. On the basis of ranks as well as scores, we introduce a new metric called scored AUC (sAUC), defined as the area under the sROC curve; the latter measures how quickly AUC deteriorates as positive scores are decreased. We study the interpretation and statistical properties of sAUC. Experimental results on UCI data sets convincingly demonstrate the effectiveness of the new metric for classifier evaluation and selection when validation data are limited.
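To make the two quantities in the abstract concrete, the sketch below computes plain AUC as the fraction of correctly ranked (positive, negative) pairs, alongside a scored variant. The `scored_auc` definition here is an assumption inferred from the abstract (each correctly ranked pair is weighted by its score difference, so rankings that are robust to small score perturbations are rewarded); the paper's exact formulation of sAUC may differ.

```python
def auc(pos_scores, neg_scores):
    """Plain AUC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half a correct pair."""
    total = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(pos_scores) * len(neg_scores))


def scored_auc(pos_scores, neg_scores):
    """Hypothetical sAUC sketch: each correctly ranked pair
    contributes its score difference instead of a constant 1
    (scores assumed to lie in [0, 1])."""
    total = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                total += p - n
    return total / (len(pos_scores) * len(neg_scores))


pos = [0.9, 0.8, 0.6]
neg = [0.7, 0.3, 0.2]
print(auc(pos, neg))         # 8 of 9 pairs correctly ranked
print(scored_auc(pos, neg))  # smaller: pairs weighted by score margin
```

Note that shrinking the positive scores toward the negatives leaves `auc` unchanged until ranks flip, while `scored_auc` degrades smoothly, which is the sensitivity the sROC curve is designed to capture.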

Keywords

Receiver Operating Characteristic, Receiver Operating Characteristic Curve, Test Instance, Ranking Performance, Negative Instance
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Shaomin Wu (Cranfield University, United Kingdom)
  • Peter Flach (University of Bristol, United Kingdom)
  • Cèsar Ferri (Universitat Politècnica de València, Spain)
