Abstract
As the name suggests, computational learning theory is about "learning" by "computation" and is the theoretical foundation of machine learning. It aims to analyze the difficulty of learning problems, provide theoretical guarantees for learning algorithms, and guide algorithm design based on theoretical analysis.
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
Cite this chapter
Zhou, ZH. (2021). Computational Learning Theory. In: Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-1967-3_12
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-1966-6
Online ISBN: 978-981-15-1967-3