
Variable Selection Using Random Forests

  • Marco Sandri
  • Paola Zuccolotto
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

One of the main topics in the development of predictive models is identifying which variables are predictors of a given outcome. Automated model selection methods, such as backward or forward stepwise regression, are classical solutions to this problem, but they generally rest on strong assumptions about the functional form of the model or the distribution of residuals. In this paper an alternative selection method, based on the Random Forests technique, is proposed in the context of classification, with an application to a real dataset.
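To make the idea concrete, the following is a minimal sketch of variable selection with Random Forests, not the authors' exact procedure: predictors are ranked by permutation importance (the drop in accuracy when a variable's values are shuffled), and those above a threshold are retained. The synthetic dataset, the threshold of 0.01, and the use of scikit-learn are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's algorithm):
# rank predictors by Random Forest permutation importance and keep
# those whose mean importance exceeds a small threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic classification data: 3 informative predictors, 7 pure noise.
# shuffle=False keeps the informative predictors in columns 0-2.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X, y)

# Permutation importance: accuracy loss when one predictor is permuted.
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)

# Keep predictors with non-negligible importance (threshold is arbitrary).
selected = [i for i in range(X.shape[1]) if imp.importances_mean[i] > 0.01]
print("selected predictors:", selected)
```

Unlike stepwise regression, this ranking makes no assumption about the functional form linking predictors to the outcome, which is the motivation the abstract gives for the approach.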

Keywords

Variable Selection, Random Forest, Real Dataset, Misclassification Error, Variable Selection Method
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. AUSTIN, P. and TU, J. (2004): Bootstrap methods for developing predictive models. The American Statistician, 58, 131–137.
  2. BREIMAN, L., FRIEDMAN, J.H., OLSHEN, R.A. and STONE, C.J. (1984): Classification and Regression Trees. Chapman & Hall, London.
  3. BREIMAN, L. (1996a): The heuristic of instability in model selection. Annals of Statistics, 24, 2350–2383.
  4. BREIMAN, L. (1996b): Bagging predictors. Machine Learning, 24, 123–140.
  5. BREIMAN, L. (2001a): Random Forests. Machine Learning, 45, 5–32.
  6. BREIMAN, L. (2001b): Statistical modeling: the two cultures. Statistical Science, 16, 199–231.
  7. BREIMAN, L. (2002): Manual on setting up, using, and understanding Random Forests v3.1. Technical Report, http://oz.berkeley.edu/users/breiman.
  8. DIETTERICH, T. (2000): An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting and randomization. Machine Learning, 40, 139–157.
  9. ENNIS, M., HINTON, G., NAYLOR, D., REVOW, M. and TIBSHIRANI, R. (1998): A comparison of statistical learning methods on the GUSTO database. Statistics in Medicine, 17, 2501–2508.
  10. GUGLIELMI, A., RUZZENENTE, A., SANDRI, M., KIND, R., LOMBARDO, F., RODELLA, L., CATALANO, F., DE MANZONI, G. and CORDIANO, C. (2002): Risk assessment and prediction of rebleeding in bleeding gastroduodenal ulcer. Endoscopy, 34, 771–779.
  11. HOCKING, R.R. (1976): The analysis and selection of variables in linear regression. Biometrics, 42, 1–49.
  12. MILLER, A.J. (1984): Selection of subsets of regression variables. Journal of the Royal Statistical Society, Series A, 147, 389–425.
  13. SANDRI, M. and ZUCCOLOTTO, P. (2004): Classification with Random Forests: the theoretical framework. Rapporto di Ricerca del Dipartimento Metodi Quantitativi, Università degli Studi di Brescia, 235.
  14. SANDRI, M. and ZUCCOLOTTO, P. (2006): Analysis of a bias effect on a tree-based variable importance measure. Evaluation of an empirical adjustment strategy. Manuscript.

Copyright information

© Springer-Verlag Heidelberg 2006

Authors and Affiliations

  • Marco Sandri¹
  • Paola Zuccolotto¹
  1. Dipartimento Metodi Quantitativi, Università di Brescia, Brescia, Italy
