Abstract
In general, the proportion of incorrectly classified samples to the total number of samples is called the error rate; that is, if \(a\) out of \(m\) samples are misclassified, then the error rate is \(E = a/m\).
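As a minimal sketch of this definition (the function name and the sample labels below are illustrative, not from the text), the error rate \(E = a/m\) can be computed directly from paired true and predicted labels:

```python
def error_rate(y_true, y_pred):
    """Return the fraction of samples whose predicted label differs
    from the true label, i.e. E = a/m."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    mistakes = sum(t != p for t, p in zip(y_true, y_pred))  # a
    return mistakes / len(y_true)                           # a / m

# Example: 2 of 5 samples are misclassified, so E = 2/5 = 0.4
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 0]
print(error_rate(y_true, y_pred))  # 0.4
```

The complement, \(1 - E\), is the accuracy, so the two quantities always sum to one.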
© 2021 Springer Nature Singapore Pte Ltd.
Cite this chapter
Zhou, ZH. (2021). Model Selection and Evaluation. In: Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-1967-3_2
DOI: https://doi.org/10.1007/978-981-15-1967-3_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-1966-6
Online ISBN: 978-981-15-1967-3