Abstract
Performance evaluation of classifiers is a crucial step in selecting the best classifier, or the best parameter settings for a classifier. The misclassification rate is often too simple a measure because it ignores the fact that misclassifications of different classes may have more or less serious consequences. On the other hand, it is often difficult to specify the consequences or costs of misclassifications exactly. ROC and AUC analysis try to overcome these problems but have their own disadvantages, and even inconsistencies. We propose a visualisation technique for classifier performance evaluation and comparison that avoids the problems of ROC and AUC analysis.
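As a rough illustration of the quantities the abstract contrasts (this is not the visualisation technique proposed in the paper), the following sketch computes the plain misclassification rate at a fixed threshold and the AUC of a two-class scoring classifier via the rank-sum (Mann–Whitney) formulation; the toy labels and scores are invented for the example.

```python
def misclassification_rate(labels, scores, threshold=0.5):
    """Fraction of examples wrongly classified at a fixed threshold."""
    errors = sum((s >= threshold) != bool(y) for y, s in zip(labels, scores))
    return errors / len(labels)

def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example is scored above a randomly chosen negative one
    (ties count as one half)."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: labels (1 = positive class) and classifier scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]

print(misclassification_rate(labels, scores))  # 1/3: two of six misclassified
print(auc(labels, scores))                     # 8/9: eight of nine pos/neg pairs ranked correctly
```

Note how the misclassification rate depends on one arbitrary threshold and weights all errors equally, whereas AUC aggregates over all thresholds — the two shortcomings the abstract points out.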
Keywords
- Receiver Operating Characteristic
- Receiver Operating Characteristic Curve
- Pareto Front
- Optimal Threshold
- Area Under Curve
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Klawonn, F., Höppner, F., May, S. (2011). An Alternative to ROC and AUC Analysis of Classifiers. In: Gama, J., Bradley, E., Hollmén, J. (eds) Advances in Intelligent Data Analysis X. IDA 2011. Lecture Notes in Computer Science, vol 7014. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24800-9_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24799-6
Online ISBN: 978-3-642-24800-9
eBook Packages: Computer Science (R0)