
Computational Learning Theory

Chapter in An Introduction to Machine Learning

Abstract

As they say, nothing is more practical than a good theory. And indeed, mathematical models of learnability have helped improve our understanding of what it takes to induce a useful classifier from data, and, conversely, why the outcome of a machine-learning undertaking so often disappoints. And so, even though this textbook does not aim to be mathematical, it cannot avoid introducing at least the basic concepts of computational learning theory.


Notes

  1. The reader will have noticed that all these requirements are satisfied by the “pies” domain from Chap. 1.

  2. Recall that in the “pies” domain from Chap. 1, the size of the hypothesis space was |H| = 2^108. Of these hypotheses, 2^96 correctly classified the entire training set (see the sketch after these notes for how such numbers enter a sample-size bound).

  3. For instance, a variation of the hill-climbing search from Sect. 1.2 might be used to this end.

  4. An analysis of situations where these requirements are not satisfied would go beyond the scope of an introductory textbook.
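The figures in Note 2 feed directly into the classical sample-size bound for a finite hypothesis space, m ≥ (1/ε)(ln |H| + ln (1/δ)): with probability at least 1 − δ, any hypothesis consistent with m random training examples has true error below ε. The following is a minimal Python sketch of this calculation; the bound itself is standard PAC theory, while the concrete values of ε and δ (and the function name) are illustrative assumptions, not taken from the chapter.

    import math

    def pac_sample_bound(hypothesis_space_size, epsilon, delta):
        """Smallest m satisfying m >= (1/epsilon) * (ln|H| + ln(1/delta)).

        Classical PAC result for a finite hypothesis space H: with
        probability at least 1 - delta, every hypothesis in H that is
        consistent with m random training examples has true error
        below epsilon.
        """
        log_h = math.log(hypothesis_space_size)
        return math.ceil((log_h + math.log(1.0 / delta)) / epsilon)

    # The "pies" domain from Note 2: |H| = 2**108.
    # epsilon = 0.1 and delta = 0.05 are illustrative choices, not from the text.
    print(pac_sample_bound(2**108, epsilon=0.1, delta=0.05))  # -> 779

Despite the astronomical size of the hypothesis space, the bound grows only with ln |H| = 108 ln 2 ≈ 75, which is why a few hundred examples already suffice in this illustration.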



Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Kubat, M. (2017). Computational Learning Theory. In: An Introduction to Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-319-63913-0_7

  • DOI: https://doi.org/10.1007/978-3-319-63913-0_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63912-3

  • Online ISBN: 978-3-319-63913-0

  • eBook Packages: Computer Science (R0)
