# Principal Component Analysis

**DOI:** https://doi.org/10.1007/978-3-642-04898-2_455

## Introduction

Large or massive data sets are increasingly common and often include measurements on many variables. It is frequently possible to reduce the number of variables considerably while still retaining much of the information in the original data set. Principal component analysis (PCA) is probably the best known and most widely used dimension-reducing technique for doing this. Suppose we have *n* measurements on a vector $\mathbf{x}$ of *p* random variables, and we wish to reduce the dimension from *p* to *q*, where *q* is typically much smaller than *p*. PCA does this by finding linear combinations, $\mathbf{a}_1'\mathbf{x}, \mathbf{a}_2'\mathbf{x}, \ldots, \mathbf{a}_q'\mathbf{x}$, called *principal components*, that successively have maximum variance for the data, subject to being uncorrelated with the previous $\mathbf{a}_k'\mathbf{x}$s. Solving this maximization problem, we find that the vectors $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_q$ are the eigenvectors of the covariance matrix, $\mathbf{S}$, of the data, corresponding to the *q* largest eigenvalues (see Eigenvalue, Eigenvector and Eigenspace). The eigenvalues...
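The procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation (the function name `pca` and the synthetic data are our own for demonstration): center the data, form the sample covariance matrix $\mathbf{S}$, take the eigenvectors for the *q* largest eigenvalues, and project.

```python
import numpy as np

def pca(X, q):
    """Project an n x p data matrix X onto its first q principal components.

    Returns the n x q matrix of component scores and the q largest
    eigenvalues of the sample covariance matrix (the component variances).
    """
    Xc = X - X.mean(axis=0)                # center each variable
    S = np.cov(Xc, rowvar=False)           # p x p sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # eigh: S is symmetric; ascending order
    order = np.argsort(eigvals)[::-1][:q]  # indices of the q largest eigenvalues
    A = eigvecs[:, order]                  # p x q loadings a_1, ..., a_q
    return Xc @ A, eigvals[order]

# Illustrative use on synthetic data: reduce p = 5 variables to q = 2 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
scores, variances = pca(X, 2)
```

Note that successive components come out uncorrelated automatically: the eigenvectors of the symmetric matrix $\mathbf{S}$ are orthogonal, so the covariance matrix of the scores, $\mathbf{A}'\mathbf{S}\mathbf{A}$, is diagonal with the chosen eigenvalues on its diagonal.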

## References and Further Reading

- Hotelling H (1933) Analysis of a complex of statistical variables into principal components. J Educ Psychol 24:417–441, 498–520
- Jackson JE (1991) A user's guide to principal components. Wiley, New York
- Jolliffe IT (2002) Principal component analysis, 2nd edn. Springer, New York
- Pearson K (1901) On lines and planes of closest fit to systems of points in space. Philos Mag 2:559–572
- Yule W, Berger M, Butler S, Newham V, Tizard J (1969) The WPPSI: an empirical evaluation with a British sample. Brit J Educ Psychol 39:1–13