Encyclopedia of Machine Learning

2010 Edition
| Editors: Claude Sammut, Geoffrey I. Webb

PAC Learning

  • Thomas Zeugmann
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-30164-8_625

Motivation and Background

A very important learning problem is the task of learning a concept. Concept learning has attracted much attention in learning theory. As a running example, consider humans, who are able to distinguish between different “things,” e.g., chair, table, car, airplane, etc. There is no doubt that humans have to learn how to distinguish “things.” Thus, in this example, each concept is a thing. To model this learning task, we have to convert “real things” into mathematical descriptions of things. One possibility is to fix some language to express a finite list of properties. Afterward, we decide which of these properties are relevant for the particular things we want to deal with, and which of them have to be fulfilled or not fulfilled, respectively. The list of properties comprises qualities or traits such as “has four legs,” “has wings,”...
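To make this encoding concrete, the sketch below (in Python) represents each thing as a Boolean vector over a fixed, finite list of properties and runs the classical elimination algorithm for PAC learning a monotone conjunction of properties: start with all properties and delete every property that some positive example lacks. The property names, the example data, and the learn_conjunction helper are hypothetical illustrations, not part of the entry.

    # Minimal sketch: a concept modeled as a monotone conjunction over
    # Boolean properties. All names and data below are illustrative
    # assumptions, not taken from the entry.
    PROPERTIES = ["has_four_legs", "has_wings", "has_a_backrest", "is_man_made"]

    def learn_conjunction(positive_examples):
        """Keep exactly the properties shared by every positive example."""
        hypothesis = set(range(len(PROPERTIES)))       # start with all properties
        for x in positive_examples:                    # x is a tuple of 0/1 values
            hypothesis -= {i for i in hypothesis if x[i] == 0}
        return [PROPERTIES[i] for i in sorted(hypothesis)]

    # Two hypothetical positive examples of the concept "chair":
    chairs = [(1, 0, 1, 1), (1, 0, 1, 1)]
    print(learn_conjunction(chairs))
    # -> ['has_four_legs', 'has_a_backrest', 'is_man_made']

The learned hypothesis always retains every property of the target concept, so it can only err by labeling some true positives as negatives; with n properties, on the order of (n/ε)·(ln n + ln(1/δ)) random positive examples suffice for the hypothesis to have error at most ε with probability at least 1 − δ, a standard bound for conjunctions in the PAC model.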



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Thomas Zeugmann
    1. Hokkaido University, Sapporo, Japan