
Machine Learning, Volume 15, Issue 1, pp 25–41

The Importance of Attribute Selection Measures in Decision Tree Induction

  • W.Z. Liu
  • A.P. White

Abstract

Recent work by Mingers and by Buntine and Niblett on the performance of various attribute selection measures has addressed the topic of random selection of attributes in the construction of decision trees. This article is concerned with the mechanisms underlying the relative performance of conventional and random attribute selection measures. The three experiments reported here employed synthetic data sets, constructed so as to have the precise properties required to test specific hypotheses. The principal underlying idea was that the performance decrement typical of random attribute selection is due to two factors. First, there is a greater chance that informative attributes will be omitted from the subset selected for the final tree. Second, there is a greater risk of overfitting, which is caused by attributes of little or no value in discriminating between classes being “locked in” to the tree structure, near the root. The first experiment showed that the performance decrement increased with the number of available pure-noise attributes. The second experiment indicated that there was little decrement when all the attributes were of equal importance in discriminating between classes. The third experiment showed that a rather greater performance decrement (than in the second experiment) could be expected if the attributes were all informative, but to different degrees.

Keywords: decision trees, noisy data, induction, attribute selection
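To make the mechanism described in the abstract concrete, the following sketch (not taken from the paper) builds a toy synthetic data set with one informative attribute and several pure-noise attributes, in the spirit of the experiments, and contrasts the root split chosen by an information-gain measure with one chosen at random. All names and parameters here (the 0.9 predictive probability, five noise attributes, the sample size) are illustrative assumptions, not the authors' experimental settings.

```python
# Sketch: conventional (information-gain) vs. random attribute selection at the root.
# Under random selection a pure-noise attribute may be "locked in" near the root.
import random
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting on binary attribute `attr`."""
    gain = entropy(labels)
    for value in (0, 1):
        subset = [y for x, y in zip(rows, labels) if x[attr] == value]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Synthetic data: attribute 0 predicts the class with probability 0.9;
# the remaining five attributes are pure noise (illustrative parameters).
random.seed(0)
n, n_noise = 200, 5
rows, labels = [], []
for _ in range(n):
    informative = random.randint(0, 1)
    label = informative if random.random() < 0.9 else 1 - informative
    rows.append([informative] + [random.randint(0, 1) for _ in range(n_noise)])
    labels.append(label)

attrs = list(range(1 + n_noise))
gains = {a: information_gain(rows, labels, a) for a in attrs}
best_by_gain = max(gains, key=gains.get)   # conventional measure
best_random = random.choice(attrs)         # random selection

print("gain-based root split: attribute", best_by_gain,
      "gain =", round(gains[best_by_gain], 3))
print("random root split:     attribute", best_random,
      "gain =", round(gains[best_random], 3))
```

On most runs the gain-based rule picks the informative attribute 0, while the random rule usually picks a noise attribute whose gain is close to zero, so the data reaching its children are split to no purpose and deeper nodes overfit more readily.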

References

  1. Breiman, L., Friedman, J.H., Olshen, R.A., & Stone, C.J. (1984). Classification and regression trees. Monterey, CA: Wadsworth.
  2. Buntine, W., & Niblett, T. (1992). A further comparison of splitting rules for decision-tree induction. Machine Learning, 8, 75–86.
  3. Keppel, G. (1973). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice-Hall.
  4. Kononenko, I., Bratko, I., & Roskar, E. (1984). Experiments in automatic learning of medical diagnostic rules (Technical Report). Jozef Stefan Institute, Ljubljana, Yugoslavia.
  5. Liu, W.Z., & White, A.P. (1991). A review of inductive learning. In I.M. Graham & R.W. Milne (Eds.), Research and development in expert systems VIII. Cambridge: Cambridge University Press.
  6. Mingers, J. (1989a). An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3, 319–342.
  7. Mingers, J. (1989b). An empirical comparison of pruning methods for decision-tree induction. Machine Learning, 4, 227–243.
  8. Niblett, T., & Bratko, I. (1987). Learning decision rules in noisy domains. In M.A. Bramer (Ed.), Research and development in expert systems III. Cambridge: Cambridge University Press.
  9. Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
  10. Quinlan, J.R. (1988). Decision trees and multi-valued attributes. Machine Intelligence, 11, 305–318.
  11. Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103, 677–680.
  12. White, A.P. (1985). PREDICTOR: An alternative approach to uncertain inference in expert systems. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 328–330). Los Altos: Morgan Kaufmann.
  13. White, A.P. (1987). Probabilistic induction by dynamic path generation in virtual trees. In M.A. Bramer (Ed.), Research and development in expert systems III. Cambridge: Cambridge University Press.
  14. White, A.P., & Liu, W.Z. (1990). Probabilistic induction by dynamic path generation for continuous variables. In T.R. Addis & R.M. Muir (Eds.), Research and development in expert systems VII. Cambridge: Cambridge University Press.

Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • W.Z. Liu (1)
  • A.P. White (1)
  1. School of Computer Science, University of Birmingham, Birmingham, United Kingdom
