Specialization of recursive predicates

  • Henrik Boström
Part of the Lecture Notes in Computer Science book series (LNCS, volume 912)

Abstract

When specializing a recursive predicate so that a set of negative examples is excluded without excluding a set of positive examples, it may be impossible to specialize or remove any clause in a refutation of a negative example without also excluding positive examples. A previously proposed solution to this problem is to apply program transformation so that recursive target predicates are replaced by non-recursive ones. However, this method prevents recursive specializations from being found. In this work, we present the algorithm SPECTRE II, which is not limited to specializing non-recursive predicates. The key idea on which the algorithm is based is that specializing or removing clauses in refutations of negative examples is not always enough to obtain a correct specialization: it is sometimes necessary to specialize clauses that appear only in refutations of positive examples. In contrast to its predecessor SPECTRE, the new algorithm is not limited to specializing clauses defining a single predicate, but may specialize clauses defining multiple predicates. Furthermore, the positive and negative examples are no longer required to be instances of the same predicate. The algorithm is proven to produce a correct specialization whenever all positive examples are logical consequences of the original program, the number of derivations of positive and negative examples is finite, and no positive and negative example have the same sequence of input clauses in their refutations.
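The problem the abstract describes can be made concrete with a small sketch (this is an illustration of the difficulty, not the paper's algorithm; the clause ids are hypothetical, and refutations are given directly as the sequences of input clauses they use rather than computed by SLD-resolution):

```python
def prunable_by_removal(pos_refutations, neg_refutations):
    """Return True iff some set of clauses can be removed so that every
    negative refutation is blocked while every positive one survives.

    A refutation survives only if all of its input clauses remain in the
    program, so removal suffices exactly when each negative refutation
    uses at least one clause that no positive refutation needs.
    """
    # Clauses used by any positive refutation must be kept.
    kept = set()
    for ref in pos_refutations:
        kept.update(ref)
    return all(any(c not in kept for c in ref) for ref in neg_refutations)


# Hypothetical clause ids c1, c2, c3.
pos = [("c1", "c2", "c1")]   # refutation of a positive example
neg_easy = [("c1", "c3")]    # uses c3, which no positive refutation needs
neg_hard = [("c2", "c1")]    # every clause here also proves a positive

print(prunable_by_removal(pos, neg_easy))  # True: removing c3 suffices
print(prunable_by_removal(pos, neg_hard))  # False: no clause can be removed
```

In the second case every clause in the negative refutation also occurs in a refutation of a positive example, so no combination of removals works; this is the situation that motivates specializing (e.g. by unfolding) rather than merely deleting clauses.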

References

  1. Bain M. and Muggleton S., "Non-Monotonic Learning", in Muggleton S. (ed.), Inductive Logic Programming, Academic Press, London (1992) 145–161
  2. Bergadano F. and Giordana A., "A Knowledge Intensive Approach to Concept Induction", Proceedings of the Fifth International Conference on Machine Learning, Morgan Kaufmann, CA (1988) 305–317
  3. Boström H. and Idestam-Almquist P., "Specialization of Logic Programs by Pruning SLD-Trees", Proceedings of the 4th International Workshop on Inductive Logic Programming, volume 237 of GMD-Studien, Gesellschaft für Mathematik und Datenverarbeitung MBH (1994) 31–48
  4. Cohen W. W., "The Generality of Overgenerality", Machine Learning: Proceedings of the Eighth International Workshop, Morgan Kaufmann (1991) 490–494
  5. Cohen W. W., "Compiling Prior Knowledge Into an Explicit Bias", Machine Learning: Proceedings of the Ninth International Workshop, Morgan Kaufmann (1992) 102–110
  6. Kanamori T. and Kawamura T., "Preservation of Stronger Equivalence in Unfold/Fold Logic Program Transformation (II)", ICOT Technical Report TR-403, Japan (1988)
  7. Ling C. X., "Non-Monotonic Specialization", Proceedings of the International Workshop on Inductive Logic Programming, Portugal (1991) 59–68
  8. Lloyd J. W., Foundations of Logic Programming, 2nd edition, Springer-Verlag (1987)
  9. Ourston D. and Mooney R. J., "Changing the Rules: A Comprehensive Approach to Theory Refinement", Proceedings of the Eighth National Conference on Artificial Intelligence, MIT Press (1990) 815–820
  10. Pazzani M. and Brunk C., "Finding Accurate Frontiers: A Knowledge-Intensive Approach to Relational Learning", Proceedings of the Eleventh National Conference on Artificial Intelligence, Morgan Kaufmann (1993) 328–334
  11. Pazzani M., Brunk C. and Silverstein G., "A Knowledge-Intensive Approach to Learning Relational Concepts", Machine Learning: Proceedings of the Eighth International Workshop, Morgan Kaufmann (1991) 432–436
  12. Quinlan J. R., "Learning Logical Definitions from Relations", Machine Learning 5 (1990) 239–266
  13. Shapiro E. Y., Algorithmic Program Debugging, MIT Press (1983)
  14. Tamaki H. and Sato T., "Unfold/Fold Transformations of Logic Programs", Proceedings of the Second International Logic Programming Conference, Uppsala University, Uppsala, Sweden (1984) 127–138
  15. Wogulis J., "Revising Relational Domain Theories", Machine Learning: Proceedings of the Eighth International Workshop, Morgan Kaufmann (1991) 462–466
  16. Wrobel S., "On the Proper Definition of Minimality in Specialization and Theory Revision", Proceedings of the European Conference on Machine Learning, Springer-Verlag (1993) 65–82

Copyright information

© Springer-Verlag 1995

Authors and Affiliations

  • Henrik Boström
  1. Dept. of Computer and Systems Sciences, Stockholm University, Kista, Sweden
