Abstract
In previous work, we described a knowledge-intensive inductive learning algorithm called abductive explanation-based learning (A-EBL) that uses background knowledge to improve the performance of a concept learner. A disadvantage of A-EBL is that it is not incremental. This article describes an alternative algorithm, IA-EBL, that learns incrementally by replacing the set-cover-based learning component of A-EBL with an extension of a perceptron learning algorithm. In most other respects IA-EBL is comparable to A-EBL, although its output can no longer be easily expressed as a logical theory. In this article, IA-EBL is described, analyzed according to Littlestone's model of mistake-bounded learnability, and finally compared experimentally to A-EBL. IA-EBL is shown to provide order-of-magnitude speedups over A-EBL in two domains when used in an incremental setting.
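The paper itself does not give the update rule here; as a rough illustration of the mistake-driven, perceptron-style online learning the abstract refers to (in the spirit of Littlestone's mistake-bound setting), consider a sketch in which each binary feature might stand for one candidate abductive explanation. All names and details below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a mistake-driven perceptron-style online
# learner over sparse binary features (e.g., one feature per candidate
# explanation). IA-EBL's actual extension of the perceptron differs;
# this only shows the incremental learning style the abstract describes.

def perceptron_update(weights, active, label, lr=1.0, threshold=0.0):
    """One online step: predict on the active feature set, then update
    the weights only when the prediction is a mistake."""
    score = sum(weights[i] for i in active)      # sparse dot product
    prediction = 1 if score > threshold else -1
    if prediction != label:                      # mistake-driven update
        for i in active:
            weights[i] += lr * label
    return prediction

# Usage: process a stream of (active-feature-set, label) examples,
# counting mistakes as in the mistake-bound model of learnability.
weights = [0.0] * 4
stream = [({0, 1}, 1), ({2}, -1), ({0, 3}, 1), ({2, 3}, -1)]
mistakes = sum(perceptron_update(weights, f, y) != y for f, y in stream)
```

Because each example is processed once and the weight vector is the only state, such a learner is naturally incremental: no stored example set or set-cover recomputation is needed between examples.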
References
Abramson, Norman. (1963). Information theory and coding. New York: McGraw Hill.
Ali, Kamal. (1989). Augmenting domain theory for explanation-based generalization. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Bergadano, Francesco, & Giordana, Attilio. (1990). Guiding induction with domain theories. In Machine learning: An artificial intelligence approach, Vol. 3 (pp. 474–492). Morgan Kaufmann.
Blum, Avrim, Hellerstein, Lisa, & Littlestone, Nick. (1991). Learning in the presence of finitely or infinitely many irrelevant attributes. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory. Santa Cruz, CA: Morgan Kaufmann.
Blum, Avrim. (1990). Separating PAC and mistake-bound learning models over the boolean domain. In Proceedings of the Third Annual Workshop on Computational Learning Theory. Rochester, NY: Morgan Kaufmann.
Cohen, William W. (1990). Learning from textbook knowledge: A case study. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
Cohen, William W. (1991). The generality of overgenerality. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Cohen, William W. (1992). Abductive explanation based learning: A solution to the multiple inconsistent explanation problem. Machine Learning, 8 (2), 167–219.
Cohen, William W. (1992). Compiling knowledge into an explicit bias. In Proceedings of the Ninth International Conference on Machine Learning. Aberdeen, Scotland: Morgan Kaufmann.
Danyluk, Andrea. (1989). Finding new rules for incomplete theories. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
de Kleer, Johan, & Williams, Brian C. (1987). Diagnosing multiple faults. Artificial Intelligence, 32, 97–130.
DeJong, Gerald, & Mooney, Raymond. (1986). Explanation-based learning: An alternative view. Machine Learning, 1 (2), 145–176.
Drastal, George, Czako, Gabor, & Raatz, Stan. (1989). Induction in abstraction spaces: A form of constructive induction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. Detroit, MI: Morgan Kaufmann.
Fawcett, Tom. (1989). Learning from plausible explanations. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Flann, Nicholas, & Dietterich, Thomas. (1989). A study of explanation-based methods for inductive learning. Machine Learning, 4 (2), 187–226.
Hirsh, Haym. (1989). Combining empirical and analytic learning with version spaces. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Hirsh, Haym. (1990). Incremental version space merging: A general framework for concept learning. Boston, MA: Kluwer Academic Publishers.
Katz, Bruce F. (1989). Integrated learning in a neural network. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Kearns, Michael, Li, Ming, Pitt, Leonard, & Valiant, Les. (1987). On the learnability of boolean formulae. In 19th Annual Symposium on the Theory of Computing. ACM Press.
Langley, Pat, Gennari, John, & Iba, Wayne. (1987). Hill-climbing theories of learning. In Proceedings of the Fourth International Workshop on Machine Learning. Irvine, CA: Morgan Kaufmann.
Littlestone, Nick. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2 (4), 285–318.
Littlestone, Nick. (1989). From on-line to batch learning. In Proceedings of the 1989 Workshop on Computational Learning Theory (pp. 269–284). Santa Cruz, CA: Morgan Kaufmann.
Littlestone, Nick. (1989). Mistake bounds and logarithmic linear-threshold learning algorithms. (Technical Report UCSC-CRL-89-11). Department of Computer and Information Sciences, University of California, Santa Cruz.
Mahadevan, Sridhar. (1989). Using determinations in EBL: A solution to the incomplete theory problem. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Mooney, Ray, & Ourston, Dirk. (1989). Induction over the unexplained. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
O'Rorke, Paul, Morris, Steven, & Schulenburg, David. (1989). Theory formation by abduction: Initial results of a case study based on the chemical revolution. In Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
O'Rorke, Paul. (1988). Automated abduction and machine learning. In Proceedings of the 1988 Spring Symposium on Explanation Based Learning. Palo Alto, CA: AAAI.
Ourston, Dirk, & Mooney, Raymond. (1990). Changing the rules: A comprehensive approach to theory refinement. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
Pazzani, Michael, Brunk, Clifford, & Silverstein, Glenn. (1991). A knowledge-intensive approach to learning relational concepts. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Pazzani, Michael. (1988). Selecting the best explanation in explanation-based learning. In Proceedings of the 1988 Spring Symposium on Explanation Based Learning. Palo Alto, CA: AAAI.
Rajamoney, Shankar, & DeJong, Gerald. (1988). Active explanation reduction: An approach to the multiple explanation problem. In Proceedings of the Fifth International Conference on Machine Learning. Ann Arbor, MI: Morgan Kaufmann.
Reiter, Ray. (1987). A theory of diagnosis from first principles. Artificial Intelligence, 32, 57–95.
Richards, Bradley, & Mooney, Raymond. (1991). First-order theory revision. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Rosenbloom, Paul, & Aasman, Jans. (1990). Knowledge level and inductive uses of chunking (EBL). In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
Towell, Geoffrey, Shavlik, Jude, & Noordewier, Michiel. (1990). Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Boston, MA: MIT Press.
Utgoff, Paul. (1989). Incremental induction of decision trees. Machine Learning, 4 (2).
Valiant, L.G. (1984). A theory of the learnable. Communications of the ACM, 27 (11), 1134–1142.
Wogulis, James. (1991). Revising relational domain theories. In Proceedings of the Eighth International Workshop on Machine Learning. Ithaca, NY: Morgan Kaufmann.
Cohen, W.W. Incremental Abductive EBL. Machine Learning 15, 5–24 (1994). https://doi.org/10.1023/A:1022657002577