Abstract
This chapter is concerned with estimating the performance of a classifier (of any kind). Three methods are described for estimating a classifier's predictive accuracy. The first is to divide the available data into a training set, used for generating the classifier, and a test set, used for evaluating its performance. The other two are \(k\)-fold cross-validation and its extreme form, \(N\)-fold (or leave-one-out) cross-validation.
A statistical measure of the accuracy of an estimate obtained by any of these methods, known as the standard error, is introduced. Experiments to estimate the predictive accuracy of the classifiers generated for various datasets, including datasets with missing attribute values, are described. Finally, a tabular way of presenting classifier performance information, called a confusion matrix, is introduced, together with the notions of true and false positive and negative classifications.
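The \(k\)-fold procedure can be sketched as follows: the data are split into \(k\) folds, and each fold in turn is held out as a test set while a classifier is built from the remaining \(k-1\) folds, so that every instance is classified exactly once. This is a minimal illustration only, using a simple 1-nearest-neighbour classifier on synthetic two-class data (both the classifier and the data are assumptions of this sketch, not part of the chapter):

```python
import math
import random

def one_nn_predict(train, x):
    # classify x by the class of its nearest training instance (1-NN),
    # using squared Euclidean distance
    nearest = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    return nearest[1]

def k_fold_accuracy(data, k):
    # k-fold cross-validation: split the data into k folds; each fold is
    # used once as the test set, so every instance is tested exactly once
    folds = [data[i::k] for i in range(k)]
    correct = 0
    for i, test_fold in enumerate(folds):
        train = [inst for j, fold in enumerate(folds) if j != i for inst in fold]
        correct += sum(1 for x, label in test_fold
                       if one_nn_predict(train, x) == label)
    n = len(data)
    p = correct / n                       # estimated predictive accuracy
    se = math.sqrt(p * (1 - p) / n)       # standard error of the estimate
    return p, se

# synthetic, well-separated two-class data (hypothetical example)
random.seed(1)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(20)] + \
       [((random.gauss(5, 1), random.gauss(5, 1)), 1) for _ in range(20)]
random.shuffle(data)

p, se = k_fold_accuracy(data, k=10)
```

Setting \(k = N\) (the number of instances) turns the same loop into leave-one-out cross-validation, where each fold contains a single instance.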
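The confusion matrix counts described above reduce, for a two-class problem, to four numbers: true positives, false negatives, false positives, and true negatives. A minimal sketch of computing them, together with the standard error \(\sqrt{p(1-p)/N}\) of the accuracy estimate, from hypothetical predicted and actual labels:

```python
import math

# hypothetical actual and predicted labels for a two-class problem
# ("+" is the positive class, "-" the negative class)
actual    = ["+", "+", "-", "-", "+", "-", "-", "+", "-", "-"]
predicted = ["+", "-", "-", "+", "+", "-", "-", "+", "-", "-"]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == "+" and p == "+")  # true positives
fn = sum(1 for a, p in pairs if a == "+" and p == "-")  # false negatives
fp = sum(1 for a, p in pairs if a == "-" and p == "+")  # false positives
tn = sum(1 for a, p in pairs if a == "-" and p == "-")  # true negatives

n = len(actual)
accuracy = (tp + tn) / n                          # proportion correctly classified
std_error = math.sqrt(accuracy * (1 - accuracy) / n)
```

The two rows of the resulting confusion matrix correspond to the actual classes and the two columns to the predicted classes, with correct classifications (tp and tn) on the main diagonal.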
© 2016 Springer-Verlag London Ltd.
Cite this chapter
Bramer, M. (2016). Estimating the Predictive Accuracy of a Classifier. In: Principles of Data Mining. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-7307-6_7
Publisher Name: Springer, London
Print ISBN: 978-1-4471-7306-9
Online ISBN: 978-1-4471-7307-6