
A Simple Bayesian Algorithm for Feature Ranking in High Dimensional Regression Problems

  • Enes Makalic
  • Daniel F. Schmidt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7106)

Abstract

Variable selection, or feature ranking, is a problem of fundamental importance in modern scientific research, where data sets comprising hundreds of thousands of potential predictor features and only a few hundred samples are not uncommon. This paper introduces a novel Bayesian algorithm for feature ranking (BFR) which does not require any user-specified parameters. The BFR algorithm is very general and can be applied to both parametric regression and classification problems. An empirical comparison of BFR against random forests and marginal covariate screening demonstrates promising performance on both real and artificial data.
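The BFR algorithm itself is described in the full text; as a rough illustration of the credible-interval idea suggested by the keywords below, the sketch here ranks the features of a linear model by how strongly each coefficient's posterior distribution is separated from zero. It uses a plain conjugate Gaussian prior rather than the authors' method, and the function name, prior variance tau2, noise variance sigma2, and ranking score are all illustrative assumptions, not part of the paper.

```python
import numpy as np

def rank_features_by_credible_separation(X, y, tau2=1.0, sigma2=1.0,
                                          n_draws=5000, seed=0):
    """Rank columns of X by how strongly the posterior of each regression
    coefficient is separated from zero (larger score = stronger evidence)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Conjugate Gaussian model: y ~ N(X beta, sigma2 I), beta ~ N(0, tau2 I).
    # Posterior: beta | y ~ N(mean, cov) with
    #   cov = (X'X / sigma2 + I / tau2)^{-1},  mean = cov X'y / sigma2.
    cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
    mean = cov @ X.T @ y / sigma2
    draws = rng.multivariate_normal(mean, cov, size=n_draws)
    # Posterior probability that each coefficient lies on its dominant side
    # of zero; 0.5 means the credible interval straddles zero symmetrically.
    prob_positive = (draws > 0).mean(axis=0)
    scores = np.maximum(prob_positive, 1.0 - prob_positive)
    ranking = np.argsort(-scores)
    return ranking, scores

# Synthetic check: only the first three of twenty features carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(100)
ranking, scores = rank_features_by_credible_separation(X, y)
print(ranking[:5])  # the informative features should appear first
```

In genuinely high-dimensional settings (features far outnumbering samples) the explicit matrix inverse and fixed tau2 and sigma2 used above would be poor choices; the appeal of BFR, per the abstract, is precisely that no such user-specified parameters are required.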

Keywords

Random Forest · Credible Interval · Ranking Method · Generalisation Error · Feature Ranking


References

  1. Breiman, L.: Better subset regression using the nonnegative garrote. Technometrics 37, 373–384 (1995)
  2. Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B) 58(1), 267–288 (1996)
  3. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society (Series B) 67(2), 301–320 (2005)
  4. Zou, H.: The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418–1429 (2006)
  5. James, G.M., Radchenko, P.: A generalized Dantzig selector with shrinkage tuning. Biometrika 96(2), 323–337 (2009)
  6. Fan, J., Samworth, R., Wu, Y.: Ultrahigh dimensional feature selection: Beyond the linear model. Journal of Machine Learning Research 10, 2013–2038 (2009)
  7. Hall, P., Miller, H.: Using generalized correlation to effect variable selection in very high dimensional problems. Journal of Computational and Graphical Statistics 18(3), 533–550 (2009)
  8. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. The Annals of Statistics 32(2), 407–451 (2004)
  9. Friedman, J., Hastie, T., Höfling, H., Tibshirani, R.: Pathwise coordinate optimization. The Annals of Applied Statistics 1(2), 302–332 (2007)
  10. Zou, H., Hastie, T., Tibshirani, R.: On the “degrees of freedom” of the lasso. The Annals of Statistics 35(5), 2173–2192 (2007)
  11. Leng, C., Lin, Y., Wahba, G.: A note on the lasso and related procedures in model selection. Statistica Sinica 16(4), 1273–1284 (2006)
  12. Park, T., Casella, G.: The Bayesian lasso. Journal of the American Statistical Association 103(482), 681–686 (2008)
  13. Kyung, M., Gill, J., Ghosh, M., Casella, G.: Penalized regression, standard errors, and Bayesian lassos. Bayesian Analysis 5(2), 369–412 (2010)
  14. Breiman, L.: Random forests. Machine Learning 45(1), 5–32 (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Enes Makalic (1)
  • Daniel F. Schmidt (1)
  1. Centre for MEGA Epidemiology, The University of Melbourne, Carlton, Australia
