
Model Selection and Evaluation

  • Chapter in: Machine Learning

Abstract

In general, the proportion of incorrectly classified samples to the total number of samples is called the error rate; that is, if \(a\) out of \(m\) samples are misclassified, then the error rate is \(E = a/m\).
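As a minimal illustration of this definition (not taken from the chapter), the error rate can be computed directly from true and predicted labels; the function error_rate and the example lists y_true and y_pred below are hypothetical placeholders.

# Minimal sketch: error rate E = a/m, where a is the number of
# misclassified samples and m is the total number of samples.
def error_rate(y_true, y_pred):
    """Return the proportion of misclassified samples, E = a/m."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    a = sum(1 for t, p in zip(y_true, y_pred) if t != p)  # misclassified count a
    m = len(y_true)                                        # total sample count m
    return a / m

# Example: 2 of 5 samples are misclassified, so E = 0.4 and accuracy = 1 - E = 0.6.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
print(error_rate(y_true, y_pred))  # 0.4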

Author information

Correspondence to Zhi-Hua Zhou.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Zhou, ZH. (2021). Model Selection and Evaluation. In: Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-1967-3_2

  • DOI: https://doi.org/10.1007/978-981-15-1967-3_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-1966-6

  • Online ISBN: 978-981-15-1967-3

  • eBook Packages: Computer Science, Computer Science (R0)
