Abstract
Multiple regression computes a fitted value for a criterion from a linear combination of the predictors. But suppose we have a collection of variables without a criterion. This state of affairs characterizes our design matrix, X, as none of the predictors is a criterion. Is there a way to create a linear combination of these variables? There is, but it is not as simple as predicting a criterion. To understand how it is done, we take up the study of matrix decompositions. There are many varieties, but all decompose a matrix into two or more smaller matrices. Their value is twofold: they highlight variables that share common variance, and they offer computationally efficient ways of solving linear equations and performing least squares estimation.
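As a brief illustration of the idea, the Cholesky decomposition named in the keywords factors a symmetric positive-definite matrix A into the product of a lower triangular matrix L and its transpose, and that factorization can then be used to solve the least squares normal equations with two cheap triangular solves. The sketch below, using NumPy and a made-up design matrix, is only an illustration of this general technique, not the chapter's own worked example:

```python
import numpy as np

# Hypothetical data: a 10 x 3 design matrix X and a criterion y,
# invented purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)

# Cholesky decomposition: factor A = X'X into L @ L.T,
# with L lower triangular.
A = X.T @ X
L = np.linalg.cholesky(A)

# Least squares via the normal equations X'X b = X'y:
# solve L z = X'y, then L' b = z, instead of inverting A.
z = np.linalg.solve(L, X.T @ y)
b = np.linalg.solve(L.T, z)

print(b)
```

The two triangular systems replace an explicit matrix inverse, which is both faster and numerically better behaved; the resulting `b` matches the coefficients an ordinary least squares routine would return.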
Keywords
Triangular matrix, Product vector, Original matrix, Cholesky decomposition, Cholesky factor