Should decision trees be learned from examples or from decision rules?

  • Ibrahim F. Imam
  • Ryszard S. Michalski
Learning and Adaptive Systems I
Part of the Lecture Notes in Computer Science book series (LNCS, volume 689)


A standard method for determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. An attractive approach that avoids this problem is to learn and store knowledge in a declarative form, e.g., as decision rules, and then, whenever needed, generate from it a decision tree that is most suitable in any given situation. This paper describes an efficient method for this purpose, called AQDT-1, which takes decision rules generated by the learning system AQ15 and builds from them a decision tree optimized according to a given quality criterion. The method is able to build conventional decision trees, as well as the so-called "skip-node" trees, in which measuring attributes assigned to some nodes may be avoided. It is shown that "skip-node" trees can be significantly simpler than conventional ones. In experiments comparing AQDT-1 with C4.5, the former outperformed the latter both in predictive accuracy and in the simplicity of the generated decision trees.
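The rule-to-tree idea above can be illustrated with a minimal sketch. The rule format, the toy rules, and the "pick the attribute tested by the most rules" selection criterion below are illustrative assumptions for exposition; they are not AQDT-1's actual representation or attribute-quality criterion, which the paper defines.

```python
# Sketch: growing a decision tree from decision rules rather than from
# examples. Rule format and attribute-selection criterion are assumptions,
# not the AQDT-1 algorithm itself.
from collections import Counter

# Each rule: (conditions, decision); conditions maps attribute -> value.
RULES = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "sunny", "humidity": "normal"}, "yes"),
    ({"outlook": "overcast"}, "yes"),
    ({"outlook": "rain"}, "no"),
]

def build_tree(rules):
    classes = {d for _, d in rules}
    if len(classes) == 1:          # all remaining rules agree -> leaf
        return classes.pop()
    counts = Counter(a for conds, _ in rules for a in conds)
    if not counts:                 # no conditions left: majority decision
        return Counter(d for _, d in rules).most_common(1)[0][0]
    # Stand-in criterion: branch on the attribute tested by most rules.
    attr = counts.most_common(1)[0][0]
    values = {conds[attr] for conds, _ in rules if attr in conds}
    node = {}
    for v in values:
        # Keep rules matching this value, plus rules that do not test
        # the attribute at all (they stay applicable on every branch;
        # this is the flavor of "skipping" a measurement).
        subset = [(c, d) for c, d in rules if c.get(attr, v) == v]
        # Drop the tested condition before recursing.
        subset = [({a: x for a, x in c.items() if a != attr}, d)
                  for c, d in subset]
        node[v] = build_tree(subset)
    return (attr, node)

tree = build_tree(RULES)
# tree -> ("outlook", {"sunny": ("humidity", {...}), "overcast": "yes", "rain": "no"})
```

The key contrast with example-based induction is that the input here is a handful of rules, not a table of training examples, so the tree can be regenerated cheaply whenever the decision-making situation (e.g., attribute measurement costs) changes.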

Key words

machine learning, inductive learning, decision trees, decision rules




  1. Bratko, I. & Lavrac, N. (Eds.), Progress in Machine Learning, Sigma Press, Wilmslow, England, 1987.
  2. Bratko, I. & Kononenko, I., "Learning Diagnostic Rules from Incomplete and Noisy Data," Interactions in AI and Statistics, B. Phelps (Ed.), Gower Technical Press, 1987.
  3. Breiman, L., Friedman, J.H., Olshen, R.A. & Stone, C.J., Classification and Regression Trees, Wadsworth Int. Group, Belmont, California, 1984.
  4. Cestnik, B. & Karalic, A., "The Estimation of Probabilities in Attribute Selection Measures for Decision Tree Induction," Proceedings of the European Summer School on Machine Learning, July 22–31, Priory Corsendonk, Belgium, 1991.
  5. Clark, P. & Niblett, T., "Induction in Noisy Domains," Progress in Machine Learning, I. Bratko and N. Lavrac (Eds.), Sigma Press, Wilmslow, 1987.
  6. Hunt, E., Marin, J. & Stone, P., Experiments in Induction, Academic Press, New York, 1966.
  7. Michalski, R.S., "AQVAL/1-Computer Implementation of a Variable-Valued Logic System VL1 and Examples of its Application to Pattern Recognition," Proceedings of the First International Joint Conference on Pattern Recognition, pp. 3–17, 1973.
  8. Michalski, R.S., Mozetic, I., Hong, J. & Lavrac, N., "The Multi-Purpose Incremental Learning System AQ15 and Its Testing Application to Three Medical Domains," Proceedings of AAAI-86, Philadelphia, PA, 1986.
  9. Michalski, R.S., "Learning Flexible Concepts: Fundamental Ideas and a Method Based on Two-tiered Representation," Machine Learning: An Artificial Intelligence Approach, Vol. III, Y. Kodratoff & R.S. Michalski (Eds.), Morgan Kaufmann, pp. 63–111, 1990.
  10. Mingers, J., "An Empirical Comparison of Selection Measures for Decision-Tree Induction," Machine Learning, Vol. 3, No. 4, pp. 319–342, Kluwer Academic Publishers, 1989.
  11. Niblett, T. & Bratko, I., "Learning Decision Rules in Noisy Domains," Proceedings of Expert Systems 86, Brighton, Cambridge University Press, Cambridge, 1986.
  12. Quinlan, J.R., "Discovering Rules by Induction from Large Collections of Examples," Expert Systems in the Micro-electronic Age, D. Michie (Ed.), Edinburgh University Press, 1979.
  13. Quinlan, J.R., "Learning Efficient Classification Procedures and Their Application to Chess End Games," Machine Learning: An Artificial Intelligence Approach, R.S. Michalski, J.G. Carbonell and T.M. Mitchell (Eds.), Morgan Kaufmann, Los Altos, 1983.
  14. Quinlan, J.R., "Induction of Decision Trees," Machine Learning, Vol. 1, No. 1, Kluwer Academic Publishers, 1986.
  15. Smyth, P., Goodman, R.M. & Higgins, C., "A Hybrid Rule-based/Bayesian Classifier," Proceedings of ECAI 90, Stockholm, August 1990.

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Ibrahim F. Imam¹
  • Ryszard S. Michalski¹
  1. Center for Artificial Intelligence, George Mason University, USA
