Machine Learning, Volume 15, Issue 1, pp 5–24

Incremental abductive EBL

  • William W. Cohen

Abstract

In previous work, we described a knowledge-intensive inductive learning algorithm called abductive explanation-based learning (A-EBL) that uses background knowledge to improve the performance of a concept learner. A disadvantage of A-EBL is that it is not incremental. This article describes an alternative learning algorithm called IA-EBL that learns incrementally; IA-EBL replaces the set-cover-based learning algorithm of A-EBL with an extension of a perceptron learning algorithm. IA-EBL is in most other respects comparable to A-EBL, except that the output of the learning system can no longer be easily expressed as a logical theory. In this article, IA-EBL is described, analyzed according to Littlestone's model of mistake-bounded learnability, and finally compared experimentally to A-EBL. IA-EBL is shown to provide order-of-magnitude speedups over A-EBL in two domains when used in an incremental setting.
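
The abstract's central algorithmic move is to replace A-EBL's set-cover step with an incremental, perceptron-style learner analyzed in Littlestone's mistake-bound model. The sketch below is only illustrative, not the paper's IA-EBL procedure: it assumes each candidate abductive explanation of an example is encoded as a boolean feature and applies a standard Winnow-style multiplicative update of the kind Littlestone analyzed; the function name `winnow_update` and the feature names are hypothetical.

```python
# Illustrative sketch only: a Winnow-style linear-threshold learner over
# boolean "explanation" features, updated one example at a time. This is
# an assumption about the flavor of update IA-EBL extends, not the paper's
# actual algorithm.

def winnow_update(weights, features, label, threshold, alpha=2.0):
    """One incremental update.

    weights:   dict mapping feature name -> positive weight (default 1.0)
    features:  set of boolean features (candidate explanations) firing
               on the current example
    label:     True for a positive example, False for a negative one
    threshold: classification threshold for the weighted sum
    Returns the prediction made before the update.
    """
    score = sum(weights.get(f, 1.0) for f in features)
    predicted = score >= threshold
    if predicted and not label:        # false positive: demote active weights
        for f in features:
            weights[f] = weights.get(f, 1.0) / alpha
    elif not predicted and label:      # false negative: promote active weights
        for f in features:
            weights[f] = weights.get(f, 1.0) * alpha
    return predicted

# Example use on a small stream of (features, label) pairs, counting mistakes
# in the spirit of mistake-bounded analysis.
weights = {}
threshold = 4.0        # e.g., n/2 for n possible explanation-features
mistakes = 0
stream = [({"expl_a", "expl_b"}, True), ({"expl_c"}, False)]
for feats, lab in stream:
    if winnow_update(weights, feats, lab, threshold) != lab:
        mistakes += 1
```

Because every update touches only the features active on the current example, the cost per example is independent of the number of examples seen so far, which is the property that makes this style of learner attractive in an incremental setting.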

Keywords

inductive learning, combining empirical and analytical learning, PAC-learning, explanation-based learning, abduction

References

  1. Abramson, Norman. (1963). Information theory and coding. New York: McGraw-Hill.
  2. Ali, Kamal. (1989). Augmenting domain theory for explanation-based generalization. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  3. Bergadano, Francesco, & Giordana, Attilio. (1990). Guiding induction with domain theories. In Machine learning: An artificial intelligence approach, Vol. 3 (pp. 474–492). Morgan Kaufmann.
  4. Blum, Avrim, Hellerstein, Lisa, & Littlestone, Nick. (1991). Learning in the presence of finitely or infinitely many irrelevant attributes. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory. Santa Cruz, CA: Morgan Kaufmann.
  5. Blum, Avrim. (1990). Separating PAC and mistake-bound learning models over the boolean domain. In Proceedings of the Third Annual Workshop on Computational Learning Theory. Rochester, NY: Morgan Kaufmann.
  6. Cohen, William W. (1990). Learning from textbook knowledge: A case study. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
  7. Cohen, William W. (1991). The generality of overgenerality. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  8. Cohen, William W. (1992). Abductive explanation based learning: A solution to the multiple inconsistent explanation problem. Machine Learning, 8(2), 167–219.
  9. Cohen, William W. (1992). Compiling knowledge into an explicit bias. In Proceedings of the Ninth International Conference on Machine Learning. Aberdeen, Scotland: Morgan Kaufmann.
  10. Danyluk, Andrea. (1989). Finding new rules for incomplete theories. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  11. de Kleer, Johan, & Williams, Brian C. (1987). Diagnosing multiple faults. Artificial Intelligence, 32, 97–130.
  12. DeJong, Gerald, & Mooney, Raymond. (1986). Explanation-based learning: An alternative view. Machine Learning, 1(2), 145–176.
  13. Drastal, George, Czako, Gabor, & Raatz, Stan. (1989). Induction in abstraction spaces: A form of constructive induction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. Detroit, MI: Morgan Kaufmann.
  14. Fawcett, Tom. (1989). Learning from plausible explanations. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  15. Flann, Nicholas, & Dietterich, Thomas. (1989). A study of explanation-based methods for inductive learning. Machine Learning, 4(2), 187–226.
  16. Hirsh, Haym. (1989). Combining empirical and analytic learning with version spaces. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  17. Hirsh, Haym. (1990). Incremental version space merging: A general framework for concept learning. Boston, MA: Kluwer Academic Publishers.
  18. Katz, Bruce F. (1989). Integrated learning in a neural network. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  19. Kearns, Michael, Li, Ming, Pitt, Leonard, & Valiant, Les. (1987). On the learnability of boolean formulae. In 19th Annual Symposium on the Theory of Computing. ACM Press.
  20. Langley, Pat, Gennari, John, & Iba, Wayne. (1987). Hill-climbing theories of learning. In Proceedings of the Fourth International Workshop on Machine Learning. Irvine, CA: Morgan Kaufmann.
  21. Littlestone, Nick. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 161–186.
  22. Littlestone, Nick. (1989). From on-line to batch learning. In Proceedings of the 1989 Workshop on Computational Learning Theory (pp. 269–284). Santa Cruz, CA: Morgan Kaufmann.
  23. Littlestone, Nick. (1989). Mistake bounds and logarithmic linear-threshold learning algorithms (Technical Report UCSC-CRL-89-11). Department of Computer and Information Sciences, University of California, Santa Cruz.
  24. Mahadevan, Sridhar. (1989). Using determinations in EBL: A solution to the incomplete theory problem. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  25. Mooney, Ray, & Ourston, Dirk. (1989). Induction over the unexplained. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  26. O'Rorke, Paul, Morris, Steven, & Schulenburg, David. (1989). Theory formation by abduction: Initial results of a case study based on the chemical revolution. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  27. O'Rorke, Paul. (1988). Automated abduction and machine learning. In Proceedings of the 1988 Spring Symposium on Explanation-Based Learning. Palo Alto, CA: AAAI.
  28. Ourston, Dirk, & Mooney, Raymond. (1990). Changing the rules: A comprehensive approach to theory refinement. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
  29. Pazzani, Michael, Brunk, Clifford, & Silverstein, Glenn. (1991). A knowledge-intensive approach to learning relational concepts. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  30. Pazzani, Michael. (1988). Selecting the best explanation in explanation-based learning. In Proceedings of the 1988 Spring Symposium on Explanation-Based Learning. Palo Alto, CA: AAAI.
  31. Rajamoney, Shankar, & DeJong, Gerald. (1988). Active explanation reduction: An approach to the multiple explanation problem. In Proceedings of the Fifth International Conference on Machine Learning. Ann Arbor, MI: Morgan Kaufmann.
  32. Reiter, Ray. (1987). A theory of diagnosis from first principles. Artificial Intelligence, 32, 57–95.
  33. Richards, Bradley, & Mooney, Raymond. (1991). First-order theory revision. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
  34. Rosenbloom, Paul, & Aasman, Jans. (1990). Knowledge level and inductive uses of chunking (EBL). In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
  35. Towell, Geoffrey, Shavlik, Jude, & Noordewier, Michiel. (1990). Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
  36. Utgoff, Paul. (1989). Incremental induction of decision trees. Machine Learning, 4(2).
  37. Valiant, L.G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134–1142.
  38. Wogulis, James. (1991). Revising relational domain theories. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.

Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • William W. Cohen
  1. AT&T Bell Laboratories, Murray Hill, NJ