
A data-driven approach to feature construction

  • Conference paper
Methodologies for Intelligent Systems (ISMIS 1994)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 869))

Abstract

This paper presents a general scheme for feature construction and its application to decision trees. In this scheme, a higher-level attribute is constructed from two lower-level ones under the guidance of the distributions of examples from different classes. The scheme can be used with different selective induction algorithms, such as decision tree learning, rule learning, and instance-based learning. A simple prototype has been implemented and integrated into a decision tree learning system. Experimental results show that our approach outperforms the standard decision tree learning algorithm on the three domains tested.
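The abstract's core idea — building a higher-level attribute from two lower-level ones, guided by how examples of different classes are distributed — can be illustrated with a small sketch. The code below is not the authors' algorithm; it is a minimal, hypothetical version that searches all pairs of boolean attributes and a fixed set of combining operators, scoring each constructed feature by information gain over the class labels (the same entropy-based guidance used in standard decision tree induction).

```python
from itertools import combinations
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def info_gain(feature, labels):
    """Information gain of a binary feature with respect to the labels."""
    n = len(labels)
    gain = entropy(labels)
    for v in (0, 1):
        subset = [y for f, y in zip(feature, labels) if f == v]
        if subset:
            gain -= (len(subset) / n) * entropy(subset)
    return gain

# Hypothetical operator set for combining two boolean attributes.
OPERATORS = {
    "AND": lambda a, b: int(a and b),
    "OR":  lambda a, b: int(a or b),
    "XOR": lambda a, b: int(a != b),
}

def best_constructed_feature(X, labels):
    """Try every attribute pair and operator; return the constructed
    feature with the highest information gain as (gain, i, j, op, feature)."""
    n_attrs = len(X[0])
    best = None
    for i, j in combinations(range(n_attrs), 2):
        for name, op in OPERATORS.items():
            feat = [op(row[i], row[j]) for row in X]
            g = info_gain(feat, labels)
            if best is None or g > best[0]:
                best = (g, i, j, name, feat)
    return best

# Toy data: the class is XOR of attributes 0 and 1; attribute 2 is noise.
X = [(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1), (0, 0, 1), (1, 1, 0)]
y = [0, 1, 1, 0, 0, 0]
gain, i, j, op_name, _ = best_constructed_feature(X, y)
print(op_name, i, j, round(gain, 3))  # -> XOR 0 1 0.918
```

On this toy XOR target, no single attribute has any information gain, but the constructed feature `XOR(a0, a1)` separates the classes perfectly — the kind of hard concept that motivates constructive induction. A learner would add the winning feature as a new attribute and re-run the selective induction step.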

Editor information

Zbigniew W. Raś, Maria Zemankova

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, J., Lu, HH. (1994). A data-driven approach to feature construction. In: Raś, Z.W., Zemankova, M. (eds) Methodologies for Intelligent Systems. ISMIS 1994. Lecture Notes in Computer Science, vol 869. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58495-1_45

  • DOI: https://doi.org/10.1007/3-540-58495-1_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58495-7

  • Online ISBN: 978-3-540-49010-4

  • eBook Packages: Springer Book Archive
