Regression and Classification Trees


Abstract

Regression trees and classification trees are suggested as tools for assessing the appropriateness of covariates and factors, and of their interactions, in linear models. Their use is demonstrated on the same example dataset as in Chap. 15. Random forests, boosting and neural networks can also be beneficial, and these are briefly discussed.
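To make the tree idea concrete, the sketch below implements the recursive binary splitting behind regression trees for a single numeric covariate: at each node, the threshold that minimises the combined sum of squared errors of the two halves is chosen, and leaves predict the mean response. This is a minimal, hypothetical illustration in Python (the chapter itself works in R); the function names and the toy data are my own, not the chapter's.

```python
# Minimal regression-tree sketch (one covariate), illustrating CART-style
# recursive binary splitting. All names and data here are illustrative.

def sse(ys):
    """Sum of squared errors of ys around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(points):
    """Return (threshold, cost) minimising total SSE of the two halves."""
    best = None
    xs = sorted({x for x, _ in points})
    for i in range(1, len(xs)):
        t = (xs[i - 1] + xs[i]) / 2  # midpoint between adjacent x values
        left = [y for x, y in points if x <= t]
        right = [y for x, y in points if x > t]
        cost = sse(left) + sse(right)
        if best is None or cost < best[1]:
            best = (t, cost)
    return best  # None when no split is possible

def grow(points, depth=2, min_leaf=2):
    """Grow a tree recursively; a leaf is the mean response (a float)."""
    ys = [y for _, y in points]
    split = best_split(points) if depth > 0 and len(points) >= 2 * min_leaf else None
    if split is None:
        return sum(ys) / len(ys)
    t, _ = split
    left = [(x, y) for x, y in points if x <= t]
    right = [(x, y) for x, y in points if x > t]
    if len(left) < min_leaf or len(right) < min_leaf:
        return sum(ys) / len(ys)
    return (t, grow(left, depth - 1, min_leaf), grow(right, depth - 1, min_leaf))

def predict(tree, x):
    """Descend splits until reaching a leaf."""
    while isinstance(tree, tuple):
        t, left, right = tree
        tree = left if x <= t else right
    return tree

# Toy data with two clearly separated groups of responses.
data = [(1, 2.0), (2, 2.1), (3, 1.9), (10, 8.0), (11, 8.2), (12, 7.8)]
tree = grow(data, depth=1)  # a single split (a "stump")
```

With one split, the tree separates the data at the midpoint between x = 3 and x = 10 and predicts each group's mean response, which is exactly the piecewise-constant fit that makes trees useful for spotting covariates and interactions worth including in a linear model.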

Keywords

Random forest · Tree model · Regression tree · Bootstrap sample · Classification tree

References

  1. Breiman, L. (2001). Random forests. Machine Learning, 45, 5–32.
  2. Breiman, L., Friedman, J., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Boca Raton: Chapman and Hall/CRC.
  3. Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29, 1189–1232.
  4. Hastie, T., Tibshirani, R., & Friedman, J. H. (2001). The elements of statistical learning. New York: Springer.
  5. R Development Core Team. (2010). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. ISBN 3-900051-07-0, http://www.R-project.org.
  6. Ripley, B. D. (1996). Pattern recognition and neural networks. New York: Cambridge University Press.

Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. Division of Biostatistics, Centre for Epidemiology and Biostatistics, Leeds Institute of Genetics, Health & Therapeutics, University of Leeds, Leeds, UK
