Advanced Computation of Sparse Precision Matrices for Big Data

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10235)


The precision matrix is the inverse of the covariance matrix. Estimating large sparse precision matrices is a challenging problem in many fields of science, engineering, and the humanities, and in machine learning generally. Recent applications often involve high-dimensional data with few observations, so that the number of covariance parameters greatly exceeds the number of data points and the sample covariance matrix is singular. Several methods have been proposed for this problem, but they do not guarantee that the resulting estimator is positive definite. Furthermore, in many cases one needs to capture additional information about the structure of the problem. In this paper, we introduce a criterion that ensures the positive definiteness of the precision matrix, and we propose the inner-outer alternating direction method of multipliers (ADMM) as an efficient method for estimating it. We show that convergence of the algorithm is ensured even with a sufficiently relaxed stopping criterion in the inner iteration. We also show that the proposed method is robust, accurate, and scalable, as it lends itself to an efficient implementation on parallel computers.
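To illustrate the class of methods the abstract describes, the following is a minimal sketch of the standard single-level ADMM for the l1-penalized log-determinant problem (the graphical lasso), not the paper's inner-outer variant, which solves the subproblems inexactly. The function name `admm_sparse_precision` and all parameter defaults are illustrative choices, not from the paper. Note that the X-update is computed through an eigendecomposition whose eigenvalues are positive by construction, so every iterate is positive definite; this is the kind of guarantee the paper's criterion formalizes.

```python
import numpy as np

def soft_threshold(A, kappa):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)

def admm_sparse_precision(S, lam=0.1, rho=1.0, n_iter=200):
    """Estimate a sparse precision matrix from a sample covariance S.

    Solves  min_X  tr(S X) - log det X + lam * ||X||_1  by ADMM.
    The X-update has a closed form via the eigendecomposition of
    rho*(Z - U) - S, and its eigenvalues (w + sqrt(w^2 + 4*rho))/(2*rho)
    are strictly positive, so X stays positive definite at every iteration.
    """
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    X = np.eye(p)
    for _ in range(n_iter):
        # X-update: closed form from the eigendecomposition.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        x = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)  # all > 0
        X = (Q * x) @ Q.T  # Q diag(x) Q^T
        # Z-update: soft-thresholding enforces sparsity.
        Z = soft_threshold(X + U, lam / rho)
        # Dual variable update.
        U = U + X - Z
    return X
```

A typical call passes the sample covariance of centered data, e.g. `admm_sparse_precision(np.cov(data, rowvar=False), lam=0.2)`; larger `lam` yields a sparser estimate.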


Keywords: Array Comparative Genomic Hybridization · Positive Definiteness · Sample Covariance Matrix · Precision Matrix · Coordinate Descent Method



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar
