
An empirical study of PLAD regression using the bootstrap

  • Original Paper
  • Published in: Computational Statistics

Abstract

Partial LAD regression uses the L1 norm associated with least absolute deviations (LAD) regression while retaining the algorithmic structure of univariate partial least squares (PLS) regression. We use the bootstrap to assess the performance of the partial LAD regression model and to compare it to PLS regression. We use a variety of examples from near-infrared (NIR) experiments as well as two sets of experimental data.
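The abstract's recipe — fit a model under an L1 criterion and use out-of-bag bootstrap resamples to estimate its prediction error — can be sketched in a few lines. The sketch below is illustrative only: it uses a simple no-intercept slope as a stand-in for a one-component model (the actual PLAD algorithm builds PLS-style latent components under the L1 criterion), and the data, function names, and contamination scheme are hypothetical, not taken from the paper.

```python
import random
import statistics

def ls_slope(x, y):
    # least-squares slope for a no-intercept model y ~ b*x
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def lad_slope(x, y):
    # L1 (least absolute deviations) slope for y ~ b*x: the minimiser of
    # sum |y_i - b*x_i| is a weighted median of the ratios y_i/x_i,
    # weighted by |x_i|
    pairs = sorted((yi / xi, abs(xi)) for xi, yi in zip(x, y) if xi != 0)
    total = sum(w for _, w in pairs)
    acc = 0.0
    for ratio, w in pairs:
        acc += w
        if acc >= total / 2:
            return ratio

def oob_bootstrap_error(x, y, fit, B=200, seed=0):
    # out-of-bag bootstrap: refit on each resample, then score the fitted
    # model on the observations left out of that resample
    rng = random.Random(seed)
    n = len(x)
    errors = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        left_out = set(range(n)) - set(idx)
        if not left_out:
            continue
        b = fit([x[i] for i in idx], [y[i] for i in idx])
        errors.extend(abs(y[i] - b * x[i]) for i in left_out)
    return statistics.mean(errors)

# toy calibration data with one gross outlier (hypothetical, not from the paper)
noise = random.Random(1)
x = [i / 10 for i in range(1, 41)]
y = [2.0 * xi + noise.gauss(0, 0.1) for xi in x]
y[5] = 100.0  # contaminated observation

err_ls = oob_bootstrap_error(x, y, ls_slope)
err_lad = oob_bootstrap_error(x, y, lad_slope)
print(f"least-squares OOB error: {err_ls:.2f}")
print(f"LAD OOB error:           {err_lad:.2f}")
```

Because both fits are scored on identical bootstrap resamples (same seed), the comparison isolates the effect of the fitting criterion: the L1 fit is pulled far less by the contaminated observation, which is the kind of robustness comparison the paper makes between PLAD and PLS.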



Author information


Correspondence to Athanassios Kondylis.



Cite this article

Kondylis, A., Whittaker, J. An empirical study of PLAD regression using the bootstrap. Computational Statistics 22, 307–321 (2007). https://doi.org/10.1007/s00180-007-0034-3
