PAC Learning

Reference work entry in the Encyclopedia of Machine Learning

Synonyms

Distribution-free learning; Probably approximately correct learning; PAC identification

Motivation and Background

An important learning problem is the task of learning a concept, and concept learning has attracted much attention in learning theory. As a running example, consider humans, who are able to distinguish between different “things,” e.g., chair, table, car, airplane, etc. There is no doubt that humans have to learn how to distinguish “things”; thus, in this example, each concept is a thing. To model this learning task, we have to convert “real things” into mathematical descriptions of things. One possibility is to fix some language in which to express a finite list of properties. Afterward, we decide which of these properties are relevant for the particular things we want to deal with, and which of them must or must not be fulfilled. The list of properties comprises qualities or traits such as “has four legs,” “has wings,” “is...
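The property-list modeling described above can be made concrete with a small sketch. This is an illustration only, not part of the entry: the property names and the "chair" concept are hypothetical, and a concept is represented here as a conjunction over boolean properties, one common choice in this setting.

```python
# Illustrative sketch (hypothetical names): a "thing" is described by a
# finite list of boolean properties, and a concept specifies which
# properties must hold and which must not.

PROPERTIES = ["has_four_legs", "has_wings", "has_a_backrest", "can_fly"]

def make_conjunction(required, forbidden):
    """Build a concept: a predicate over instances (dicts property -> bool)."""
    def concept(instance):
        return (all(instance[p] for p in required)
                and not any(instance[p] for p in forbidden))
    return concept

# A hypothetical "chair" concept over the property list above.
chair = make_conjunction(required=["has_four_legs", "has_a_backrest"],
                         forbidden=["has_wings"])

table = {"has_four_legs": True, "has_wings": False,
         "has_a_backrest": False, "can_fly": False}
print(chair(table))  # a table lacks a backrest, so it is not a chair -> False
```

Under this encoding, each concept is simply the set of instances it classifies as positive, which is the form in which concepts are studied in the PAC model.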


Recommended Reading

  • Angluin, D. (1988). Queries and concept learning. Machine Learning, 2(4), 319–342.

  • Angluin, D. (1992). Computational learning theory: Survey and selected bibliography. In Proceedings of the twenty-fourth annual ACM symposium on theory of computing (pp. 351–369). New York: ACM Press.

  • Anthony, M., & Biggs, N. (1992). Computational learning theory: Cambridge tracts in theoretical computer science (No. 30). Cambridge: Cambridge University Press.

  • Anthony, M., Biggs, N., & Shawe-Taylor, J. (1990). The learnability of formal concepts. In M. A. Fulk & J. Case (Eds.), Proceedings of the third annual workshop on computational learning theory (pp. 246–257). San Mateo, CA: Morgan Kaufmann.

  • Arora, S., & Barak, B. (2009). Computational complexity: A modern approach. Cambridge: Cambridge University Press.

  • Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M. K. (1987). Occam’s razor. Information Processing Letters, 24(6), 377–380.

  • Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M. K. (1989). Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4), 929–965.

  • Ehrenfeucht, A., Haussler, D., Kearns, M., & Valiant, L. (1988). A general lower bound on the number of examples needed for learning. In D. Haussler & L. Pitt (Eds.), COLT ’88, Proceedings of the 1988 workshop on computational learning theory, August 3–5, 1988, MIT (pp. 139–154). San Francisco: Morgan Kaufmann.

  • Haussler, D. (1987). Bias, version spaces and Valiant’s learning framework. In P. Langley (Ed.), Proceedings of the fourth international workshop on machine learning (pp. 324–336). San Mateo, CA: Morgan Kaufmann.

  • Haussler, D., Kearns, M., Littlestone, N., & Warmuth, M. K. (1991). Equivalence of models for polynomial learnability. Information and Computation, 95(2), 129–161.

  • Kearns, M. J., & Vazirani, U. V. (1994). An introduction to computational learning theory. Cambridge, MA: MIT Press.

  • Linial, N., Mansour, Y., & Rivest, R. L. (1991). Results on learnability and the Vapnik–Chervonenkis dimension. Information and Computation, 90(1), 33–49.

  • Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 285–318.

  • Maass, W., & Turán, G. (1990). On the complexity of learning from counterexamples and membership queries. In Proceedings of the thirty-first annual symposium on Foundations of Computer Science (FOCS 1990), St. Louis, Missouri, October 22–24, 1990 (pp. 203–210). Los Alamitos, CA: IEEE Computer Society.

  • Natarajan, B. K. (1991). Machine learning: A theoretical approach. San Mateo, CA: Morgan Kaufmann.

  • Pitt, L., & Valiant, L. G. (1988). Computational limitations on learning from examples. Journal of the ACM, 35(4), 965–984.

  • Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5(2), 197–227.

  • Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134–1142.

  • Xiao, D. (2009). On basing ZK ≠ BPP on the hardness of PAC learning. In Proceedings of the twenty-fourth annual IEEE Conference on Computational Complexity, CCC 2009, Paris, France, July 15–18, 2009 (pp. 304–315). Los Alamitos, CA: IEEE Computer Society.

Copyright information

© 2011 Springer Science+Business Media, LLC

Cite this entry

Zeugmann, T. (2011). PAC Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_625
