Abstract
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good “lessons” to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
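The three learning steps named above — (a) acquiring a new case when none exists, (b) indexing it in memory, and (c) refining it through reuse — can be sketched as a minimal case library. This is an illustrative sketch only, not the article's actual implementation; all class and method names here are invented for illustration, and retrieval is reduced to simple feature-overlap matching.

```python
from collections import Counter


class Case:
    """A stored experience: a feature description plus the lesson it teaches."""

    def __init__(self, description, lesson):
        self.description = dict(description)  # feature name -> value
        self.lesson = lesson

    def refine(self, new_features):
        # (c) Incremental refinement: fold in features observed when the
        # case was reused to understand a new situation.
        self.description.update(new_features)


class CaseLibrary:
    """A case memory indexed by (feature, value) pairs."""

    def __init__(self):
        self.index = {}  # (feature, value) -> list of cases

    def add_case(self, case):
        # (a) + (b): store a new case and learn indices for it, here simply
        # one index entry per feature of its description.
        for pair in case.description.items():
            self.index.setdefault(pair, []).append(case)

    def retrieve(self, situation):
        # Return the indexed case sharing the most features with the new
        # situation, or None -- the cue for learning a brand-new case.
        hits, by_id = Counter(), {}
        for pair in situation.items():
            for case in self.index.get(pair, []):
                hits[id(case)] += 1
                by_id[id(case)] = case
        if not hits:
            return None
        return by_id[hits.most_common(1)[0][0]]
```

A reasoner built on such a library would call `retrieve` on each new situation, fall back to `add_case` when retrieval fails, and call `refine` on a retrieved case whenever the new situation reveals features the stored case lacked.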
Ram, A. Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases. Machine Learning 10, 201–248 (1993). https://doi.org/10.1023/A:1022634926452