Machine Learning, Volume 17, Issue 1, pp 69–105

Quantifying prior determination knowledge using the PAC learning model

  • Sridhar Mahadevan
  • Prasad Tadepalli


Prior knowledge, or bias, regarding a concept can reduce the number of examples needed to learn it. Probably Approximately Correct (PAC) learning is a mathematical model of concept learning that can be used to quantify the reduction in the number of examples due to different forms of bias. Thus far, PAC learning has mostly been used to analyze syntactic bias, such as limiting concepts to conjunctions of boolean propositions. This paper demonstrates that PAC learning can also be used to analyze semantic bias, such as a domain theory about the concept being learned. The key idea is to view the hypothesis space in PAC learning as the set of concepts consistent with all prior knowledge, syntactic and semantic. In particular, the paper presents an analysis of determinations, a type of relevance knowledge. The results of the analysis reveal crisp distinctions and relations among different determinations, and illustrate the usefulness of an analysis based on the PAC learning model.
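The reduction the abstract describes can be illustrated with the standard sample-complexity bound for a consistent learner over a finite hypothesis class H, namely m ≥ (1/ε)(ln|H| + ln(1/δ)). This sketch is a generic PAC illustration, not a formula from the paper's analysis of determinations: restricting n-variable boolean concepts (a syntactic bias) shrinks |H| from 2^(2^n) to 3^n, and the bound shrinks accordingly.

```python
import math

def sample_bound(ln_h: float, eps: float, delta: float) -> int:
    """Examples sufficient for a consistent learner over a finite
    hypothesis class H to be (eps, delta)-PAC: (1/eps)(ln|H| + ln(1/delta))."""
    return math.ceil((ln_h + math.log(1.0 / delta)) / eps)

n, eps, delta = 10, 0.1, 0.1

# No syntactic bias: any boolean function of n variables, |H| = 2^(2^n).
unbiased = sample_bound((2 ** n) * math.log(2), eps, delta)

# Syntactic bias: conjunctions of literals, |H| = 3^n
# (each variable appears positive, negated, or not at all).
conjunctions = sample_bound(n * math.log(3), eps, delta)

print(unbiased, conjunctions)  # 7121 vs. 133 examples
```

The same bound applies to a semantic bias once the hypothesis space is taken to be the set of concepts consistent with all prior knowledge, which is the paper's key idea.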


Keywords: Determinations · PAC learning · bias · prior knowledge · incomplete theories



Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • Sridhar Mahadevan (1)
  • Prasad Tadepalli (2)
  1. Department of Computer Science and Engineering, University of South Florida, Tampa
  2. Department of Computer Science, Oregon State University, Corvallis
