Matrix Algebra, pp. 399–458

Selected Applications in Statistics

  • James E. Gentle
Part of the Springer Texts in Statistics book series (STS)


Data come in many forms. In the broad view, the term “data” embraces all representations of information or knowledge. No single structure can efficiently contain all of these representations. Some data are free-form text (for example, the Federalist Papers, which were the subject of a famous statistical analysis), other data have a hierarchical structure (for example, political units and subunits), and still other data are encodings of methods or algorithms. (This broad view is entirely consistent with the concept of a “stored-program computer”: the program is the data.)


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • James E. Gentle, Fairfax, USA
