Handling Continuous-Valued Attributes in Decision Tree with Neural Network Modeling

  • DaeEun Kim
  • Jaeho Lee
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1810)

Abstract

Induction trees are useful for obtaining a proper set of rules from a large number of examples, but they have difficulty capturing relations between continuous-valued data points. Many data sets show significant correlations between input variables, and a large amount of useful information is hidden in the data as nonlinearities. Neural networks have been shown to model the nonlinear characteristics of sample data better than the direct application of an induction tree. In this paper we propose deriving a compact set of rules that supports data with relations between input variables. Those relations, expressed as a set of linear classifiers, can be obtained from neural network modeling based on back-propagation. This also alleviates the overgeneralization and overspecialization problems often seen in induction trees. We have tested this scheme on several data sets and compared it with decision tree results.
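The approach in the abstract can be pictured concretely. The following is a minimal sketch, assuming scikit-learn and the iris data set; it illustrates the general idea rather than the authors' algorithm. A back-propagation network is trained, each hidden unit's incoming weights are read off as a linear classifier (the unit fires when w . x + b >= 0), and a decision tree is induced over those binary hyperplane features, so a single rule condition can express a relation between continuous input variables. The hidden-layer size and the zero threshold are assumptions made for the example.

# Minimal sketch of the abstract's idea, assuming scikit-learn; an
# illustration, not the authors' algorithm. Each hidden unit of a trained
# back-propagation network is read as a linear classifier, and a decision
# tree is induced over those binary features instead of the raw attributes.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)  # illustrative data set

# Small back-propagation network; three hidden units is an arbitrary choice.
net = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
net.fit(X, y)

# The incoming weights of hidden unit j define the hyperplane
# W[:, j] . x + b[j] = 0 over the continuous inputs.
W, b = net.coefs_[0], net.intercepts_[0]
H = (X @ W + b >= 0).astype(int)  # binary linear-classifier features

# The induced rules now test linear combinations of inputs, so relations
# between continuous attributes appear in single tree nodes.
tree = DecisionTreeClassifier(random_state=0).fit(H, y)
print(export_text(tree, feature_names=[f"hyperplane_{j}" for j in range(H.shape[1])]))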

Keywords

Neural Network, Hidden Layer, Neural Network Modeling, Logic Gate, Induction Tree

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • DaeEun Kim (1)
  • Jaeho Lee (2)
  1. Division of Informatics, University of Edinburgh, Edinburgh, UK
  2. Department of Electrical Engineering, University of Seoul, Seoul, Korea