Abstract
As they say, nothing is more practical than a good theory. Indeed, mathematical models of learnability have helped improve our understanding of what it takes to induce a useful classifier from data and, conversely, why the outcome of a machine-learning undertaking so often disappoints. And so, even though this textbook does not aspire to be mathematical, it cannot avoid introducing at least the basic concepts of computational learning theory.
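One flavor of the results this theory delivers can be illustrated with the standard sample-size bound for a consistent learner over a finite hypothesis space H: with probability at least 1 − δ, any hypothesis consistent with m ≥ (1/ε)(ln |H| + ln (1/δ)) random training examples has error at most ε. The sketch below evaluates this bound; the concrete values of ε and δ are illustrative assumptions, not taken from the chapter.

```python
import math

# Sample-size bound for a consistent learner over a finite
# hypothesis space H:  m >= (1/epsilon) * (ln|H| + ln(1/delta)).
# epsilon = allowed error, delta = allowed failure probability.
def sample_bound(ln_H, epsilon, delta):
    return math.ceil((ln_H + math.log(1 / delta)) / epsilon)

# With |H| = 2^108 (as in the "pies" domain of Chap. 1),
# error at most 10%, and confidence 95% (illustrative choices):
m = sample_bound(108 * math.log(2), epsilon=0.1, delta=0.05)
print(m)  # 779
```

Even for a hypothesis space of astronomical size, ln |H| grows only linearly in the exponent, which is why a few hundred examples can suffice here.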
Notes
1. The reader will have noticed that all these requirements are satisfied by the "pies" domain from Chap. 1.
2. Recall that in the "pies" domain from Chap. 1, the size of the hypothesis space was |H| = 2^108. Of these hypotheses, 2^96 classified the entire training set correctly.
3. For instance, a variation of the hill-climbing search from Sect. 1.2 might be used to this end.
4. Analysis of situations where these requirements are not satisfied would go beyond the scope of an introductory textbook.
Cite this chapter
Kubat, M. (2017). Computational Learning Theory. In: An Introduction to Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-319-63913-0_7
© 2017 Springer International Publishing AG
Print ISBN: 978-3-319-63912-3
Online ISBN: 978-3-319-63913-0