Backpropagation in Decision Trees for Regression

  • Conference paper
  • First Online: 01 January 2001
  • pp 348–359
  • Cite this conference paper
Machine Learning: ECML 2001 (ECML 2001)
  • Victor Medina-Chico1,
  • Alberto Suárez1 &
  • James F. Lutsko2

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2167)

Included in the following conference series:

  • European Conference on Machine Learning

Abstract

A global optimization algorithm is designed to find the parameters of a CART regression tree extended with linear predictors at its leaves. To render the optimization mathematically tractable, the internal decisions of the CART tree are made continuous: the crisp decisions at the internal nodes are replaced with soft ones. The algorithm then adjusts the parameters of the tree in a manner similar to the backpropagation algorithm in multilayer perceptrons. This procedure makes it possible to generate regression trees optimized with a global cost function, which provide a continuous representation of the unknown function and whose architecture is fixed automatically by the data. Integrating complementary features of symbolic and connectionist methods in a single decision system improves prediction accuracy on both synthetic and real-world regression problems.
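The core idea of the abstract — replacing a crisp split with a sigmoidal gate so the tree's output becomes differentiable in its parameters, then tuning everything by gradient descent on a global squared-error cost — can be illustrated with a minimal depth-one sketch. This is not the paper's algorithm (which grows a full CART tree first and then optimizes all of its parameters jointly); the class name `SoftSplitTree`, the initialization, and the learning rate are hypothetical choices for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftSplitTree:
    """Depth-1 soft regression tree: one sigmoidal split gating two
    linear leaf models. Illustrative sketch only, not the paper's
    full CART-based procedure."""

    def __init__(self, n_features, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(size=n_features)      # split direction (hypothetical init)
        self.b = 0.0                              # split offset
        # two linear leaves, each with a bias term, initialized near zero
        self.leaf = rng.normal(size=(2, n_features + 1)) * 0.01

    def predict(self, X):
        g = sigmoid(X @ self.w + self.b)          # soft membership of the left leaf
        Xb = np.hstack([X, np.ones((len(X), 1))])
        yl = Xb @ self.leaf[0]
        yr = Xb @ self.leaf[1]
        return g * yl + (1.0 - g) * yr            # gated mixture of the two leaves

    def step(self, X, y, lr=0.05):
        """One gradient-descent step on the (global) squared-error cost."""
        g = sigmoid(X @ self.w + self.b)
        Xb = np.hstack([X, np.ones((len(X), 1))])
        yl, yr = Xb @ self.leaf[0], Xb @ self.leaf[1]
        pred = g * yl + (1.0 - g) * yr
        err = pred - y                            # d(cost)/d(pred), up to a factor of 2/N
        # backpropagate through the gate: d(pred)/dg = yl - yr, dg/dz = g(1-g)
        dgate = err * (yl - yr) * g * (1.0 - g)
        self.w -= lr * (X.T @ dgate) / len(X)
        self.b -= lr * dgate.mean()
        # backpropagate into each leaf, weighted by its soft membership
        self.leaf[0] -= lr * (Xb.T @ (err * g)) / len(X)
        self.leaf[1] -= lr * (Xb.T @ (err * (1.0 - g))) / len(X)
        return np.mean(err ** 2)
```

Because the gate is a sigmoid rather than a step function, the model's output varies continuously with the split parameters, so all parameters (split and leaves) receive gradients from the single global cost — the property that makes backpropagation-style training possible here.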



Author information

Authors and Affiliations

  1. Escuela Técnica Superior de Informática, Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, 28049, Madrid, Spain

    Victor Medina-Chico & Alberto Suárez

  2. Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, C.P. 231 Campus Plaine, B-1050, Brussels, Belgium

    James F. Lutsko


Editor information

Editors and Affiliations

  1. Department of Computer Science, Albert-Ludwigs University Freiburg, Georges Köhler-Allee, Geb. 079, 79110, Freiburg, Germany

    Luc De Raedt

  2. Department of Computer Science, University of Bristol, Merchant Ventures Bldg., Woodland Road, Bristol, BS8 1UB, UK

    Peter Flach


Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Medina-Chico, V., Suárez, A., Lutsko, J.F. (2001). Backpropagation in Decision Trees for Regression. In: De Raedt, L., Flach, P. (eds) Machine Learning: ECML 2001. ECML 2001. Lecture Notes in Computer Science (LNAI), vol 2167. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44795-4_30

  • DOI: https://doi.org/10.1007/3-540-44795-4_30

  • Published: 30 August 2001

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42536-6

  • Online ISBN: 978-3-540-44795-5

  • eBook Packages: Springer Book Archive

