Inductive strengthening: The effects of a simple heuristic for restricting hypothesis space search

  • Foster John Provost
  • Bruce G. Buchanan
Submitted Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 642)


This paper examines the effects on a learning program of using a simple heuristic for restricting hypothesis space search and suggests the desirability of making the heuristic explicit so that it can be altered easily. The heuristic is: only consider new hypotheses that cover at least one example not covered by the current concept description. We study the use of the heuristic for three tasks: (i) restricting subsequent search after part of a concept description is learned, (ii) restricting search when partial concept descriptions are provided as initial knowledge, and (iii) restricting search when the heuristic is used as a constant bias to a higher level program that adjusts the bias of the learning program along different dimensions. We show that not only does this heuristic reduce the number of nodes searched, it also reduces the size of the resultant concept description and increases its predictive accuracy.
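The heuristic described in the abstract can be illustrated with a minimal sketch. The representation below (hypotheses as sets of attribute-value conditions, and the helper names `covers` and `strengthens`) is an assumption for illustration, not the paper's actual implementation:

```python
# Illustrative sketch of the inductive-strengthening heuristic:
# admit a candidate hypothesis only if it covers at least one
# example not covered by the current concept description.
# The representation (condition sets over attribute-value dicts)
# is hypothetical, chosen for clarity.

def covers(hypothesis, example):
    """A hypothesis is a set of (attribute, value) conditions;
    it covers an example when every condition matches."""
    return all(example.get(attr) == val for attr, val in hypothesis)

def strengthens(candidate, examples, current_description):
    """Return True if the candidate covers some example that no
    rule in the current description covers; otherwise it would be
    pruned from the hypothesis space search."""
    for ex in examples:
        if covers(candidate, ex) and not any(
            covers(rule, ex) for rule in current_description
        ):
            return True
    return False

# Toy data: examples as attribute-value dictionaries.
examples = [
    {"color": "red", "size": "big"},
    {"color": "red", "size": "small"},
    {"color": "blue", "size": "big"},
]
current = [frozenset({("color", "red")})]  # already covers both red examples

cand_a = frozenset({("size", "big")})  # covers the blue example -> kept
cand_b = frozenset({("color", "red"), ("size", "small")})  # adds nothing -> pruned

print(strengthens(cand_a, examples, current))  # True
print(strengthens(cand_b, examples, current))  # False
```

In this toy run, `cand_b` is rejected even though it is consistent with the data, because every example it covers is already handled by the current description; this is the mechanism by which the heuristic shrinks both the search and the size of the final concept description.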




Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Foster John Provost (1)
  • Bruce G. Buchanan (1)

  1. Department of Computer Science, University of Pittsburgh, Pittsburgh, USA
