Sparse Covariance Estimates for High Dimensional Classification Using the Cholesky Decomposition
Results in the time series analysis literature show that, through the Cholesky decomposition, a covariance estimate can be expressed as a sequence of regressions. Moreover, these results imply that the inverse of the covariance matrix can be estimated directly. This leads to a novel approach, based on the Cholesky decomposition, for approximating covariance matrices in high dimensional classification problems. By assuming that some of the coefficients in these regressions can be set to zero, simpler estimates of the class-wise covariance matrices are obtained. Reducing the number of parameters to be estimated in the classifier yields good generalization performance. Experiments on three different feature sets from a dataset of images of handwritten numerals show that the simplified covariance estimates produced by the proposed method are competitive with conventional classifiers such as support vector machines.
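The regression view of the Cholesky decomposition mentioned above can be sketched in a few lines. The sketch below is a minimal NumPy illustration of the general identity, not the paper's implementation: regressing each variable on its predecessors yields a unit lower-triangular matrix T and a diagonal matrix D of residual variances with T Σ Tᵀ = D, so the inverse covariance is Tᵀ D⁻¹ T. All variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5
# Correlated sample data (purely illustrative), centered column-wise.
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
X -= X.mean(axis=0)

# Modified Cholesky via sequential regressions: regress variable j
# on variables 0..j-1; the negated coefficients fill row j of T.
T = np.eye(p)
d = np.empty(p)
d[0] = X[:, 0] @ X[:, 0] / n
for j in range(1, p):
    phi, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
    resid = X[:, j] - X[:, :j] @ phi
    T[j, :j] = -phi
    d[j] = resid @ resid / n  # residual variance -> diagonal of D

# Sparsity, in the spirit of the paper: small entries of phi could be
# thresholded to zero before forming T, reducing parameter count.

Sigma = X.T @ X / n                      # sample covariance (MLE)
Prec = T.T @ np.diag(1.0 / d) @ T        # direct inverse-covariance estimate
assert np.allclose(Prec, np.linalg.inv(Sigma))
```

Because the OLS residuals are orthogonal to all preceding variables, T Σ Tᵀ is exactly diagonal in-sample, so the reconstruction matches the matrix inverse without ever inverting Σ.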
Keywords: Covariance matrix, covariance estimation, Zernike moments, Cholesky decomposition, generalization performance