A generic algorithm for learning rules with hierarchical exceptions

  • Tobias Scheffer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 991)


An algorithm for learning ripple-down rules, that is, rules with hierarchical exceptions, is presented. The algorithm is generic with respect to the set of possible conditions: conditions are manipulated only through an abstract generalization operator. A specialization of the algorithm that learns classification rules in real-valued attribute space is shown and compared to other machine-learning, neural-network, and statistical algorithms. Learning algorithms for graphs or first-order logics can be derived as well.
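The hierarchical-exception structure the abstract describes can be illustrated with a minimal sketch. The class and function names below are hypothetical, and axis-aligned boxes over real-valued attributes stand in for one concrete choice of condition language; the paper's actual generalization operator and learning procedure are not reproduced here.

```python
# Sketch of ripple-down rules: each rule carries a condition, a conclusion,
# and a list of exception rules that override it on the examples they match.

class Rule:
    def __init__(self, condition, label):
        self.condition = condition   # predicate over an example
        self.label = label           # conclusion if no exception fires
        self.exceptions = []         # child rules, checked recursively

    def classify(self, x):
        # The deepest matching exception determines the class;
        # otherwise this rule's own label applies.
        for exc in self.exceptions:
            if exc.condition(x):
                return exc.classify(x)
        return self.label

def box_condition(lows, highs):
    """One possible condition language: axis-aligned boxes in
    real-valued attribute space (an illustrative assumption)."""
    return lambda x: all(lo <= v <= hi for lo, v, hi in zip(lows, x, highs))

# Default rule: class 0 everywhere, with an exception box for class 1
# that itself contains a nested exception back to class 0.
root = Rule(lambda x: True, 0)
inner = Rule(box_condition([0, 0], [10, 10]), 1)
hole = Rule(box_condition([4, 4], [6, 6]), 0)
inner.exceptions.append(hole)
root.exceptions.append(inner)

print(root.classify([20, 20]))  # outside the box -> 0
print(root.classify([2, 2]))    # inside the inner box -> 1
print(root.classify([5, 5]))    # inside the nested exception -> 0
```

A generic learner in this style would only need the generalization operator (here, growing a box to cover new examples) to be swapped out in order to handle other condition languages, such as graphs or first-order literals.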


Keywords: machine learning, classification, exceptions




Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Tobias Scheffer
  1. Artificial Intelligence Research Group, Technische Universität Berlin, Berlin
