
Extensions of the Classical Linear Model

Chapter in: Regression

Abstract

This chapter discusses several extensions of the classical linear model. Section 4.1 describes the general linear model and its applications; this model allows for correlated errors and heteroscedastic error variances. Section 4.2 discusses several techniques for regularizing the least squares estimator. Such regularization is useful when the design matrix is highly collinear or even rank deficient, and regularization techniques also allow for built-in variable selection. Section 4.3 covers boosting as an alternative approach for estimating and selecting variables in linear models. Section 4.4 describes Bayesian linear models as an alternative to the frequentist linear model framework; in modern statistics, Bayesian approaches have become increasingly important and widely used.
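
Since only the abstract is available in this preview, the following is a minimal illustrative sketch (in Python with NumPy and scikit-learn; it is not the book's own code, and the book itself works in R) of the three ideas the abstract names: generalized least squares for a general linear model with heteroscedastic errors, ridge and lasso regularization of the least squares estimator, and a Bayesian linear model. The simulated data and all variable names are assumptions made purely for illustration.

```python
# Illustrative sketch only (not from the chapter): the extensions named in
# the abstract, on simulated data.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, BayesianRidge

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])   # assumed true coefficients

# --- General linear model: heteroscedastic errors, fitted by GLS ---
# Assume the error covariance is known and diagonal with variances w_i;
# whitening by Sigma^{-1/2} reduces GLS to ordinary least squares.
w = rng.uniform(0.5, 2.0, size=n)              # error variances (assumed known)
y = X @ beta + rng.normal(scale=np.sqrt(w))
W_half = np.diag(1.0 / np.sqrt(w))             # whitening transform
beta_gls, *_ = np.linalg.lstsq(W_half @ X, W_half @ y, rcond=None)

# --- Regularized least squares: ridge shrinks, lasso also selects ---
beta_ridge = Ridge(alpha=1.0).fit(X, y).coef_
beta_lasso = Lasso(alpha=0.1).fit(X, y).coef_  # some coefficients exactly 0

# --- Bayesian linear model: posterior mean under a Gaussian prior ---
beta_bayes = BayesianRidge().fit(X, y).coef_
```

The lasso coefficients illustrate the built-in variable selection mentioned in the abstract: with a suitable penalty weight, the estimates for irrelevant predictors are shrunk exactly to zero.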


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

Cite this chapter

Fahrmeir, L., Kneib, T., Lang, S., Marx, B. (2013). Extensions of the Classical Linear Model. In: Regression. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34333-9_4
