Abstract
Results in the time series analysis literature show that, through the Cholesky decomposition, a covariance estimate can be expressed as a sequence of regressions. These results further imply that the inverse of the covariance matrix can be estimated directly. This leads to a novel approach for approximating covariance matrices in high-dimensional classification problems based on the Cholesky decomposition. By assuming that some of the coefficients in these regressions can be set to zero, simpler estimates for the class-wise covariance matrices are obtained. Reducing the number of parameters to estimate in the classifier yields good generalization performance. Experiments on three different feature sets from a dataset of images of handwritten numerals show that the simplified covariance estimates from the proposed method are competitive with results from conventional classifiers such as support vector machines.
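The regression view of the Cholesky decomposition can be sketched as follows: regressing each variable on its predecessors gives a unit lower-triangular matrix T of negated coefficients and a diagonal matrix D of residual variances, so that the inverse covariance is T'D⁻¹T, and zeroing small coefficients sparsifies the estimate. This is a minimal NumPy illustration of that idea, not the authors' exact estimator; the function name and the simple hard-thresholding rule are assumptions for illustration:

```python
import numpy as np

def cholesky_regression_precision(X, threshold=0.0):
    """Estimate the inverse covariance of X (n samples x p features)
    via the modified Cholesky decomposition: each variable is regressed
    on its predecessors, giving Sigma^{-1} = T' D^{-1} T, where T is
    unit lower triangular holding the negated regression coefficients
    and D holds the residual variances. Coefficients with absolute
    value <= threshold are set to zero (illustrative sparsity rule)."""
    Xc = X - X.mean(axis=0)          # center the data
    n, p = Xc.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()            # first variable: no predecessors
    for j in range(1, p):
        # least-squares regression of column j on columns 0..j-1
        phi, *_ = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)
        phi[np.abs(phi) <= threshold] = 0.0   # zero out small coefficients
        T[j, :j] = -phi
        resid = Xc[:, j] - Xc[:, :j] @ phi
        d[j] = resid.var()           # residual variance for D
    return T.T @ np.diag(1.0 / d) @ T
```

With threshold=0 this reproduces the inverse of the (maximum-likelihood) sample covariance exactly when it is nonsingular; a positive threshold trades that exactness for fewer parameters, which is the motivation for the classifier described above.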
Keywords
- Covariance Matrices
- Covariance Estimate
- Zernike Moment
- Cholesky Decomposition
- Good Generalization Performance
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Berge, A., Solberg, A.S. (2006). Sparse Covariance Estimates for High Dimensional Classification Using the Cholesky Decomposition. In: Yeung, D.Y., Kwok, J.T., Fred, A., Roli, F., de Ridder, D. (eds) Structural, Syntactic, and Statistical Pattern Recognition. SSPR/SPR 2006. Lecture Notes in Computer Science, vol 4109. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11815921_92
DOI: https://doi.org/10.1007/11815921_92
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-37236-3
Online ISBN: 978-3-540-37241-7