D-trace estimation of a precision matrix using adaptive Lasso penalties

  • Vahe Avagyan (corresponding author)
  • Andrés M. Alonso
  • Francisco J. Nogales
Regular Article


The accurate estimation of a precision matrix plays a crucial role in the current age of high-dimensional data explosion. To deal with this problem, one of the most prominent and commonly used techniques is \(\ell _1\) norm (Lasso) penalization of a given loss function. This approach guarantees the sparsity of the precision matrix estimate for properly selected penalty parameters. However, the \(\ell _1\) norm penalization often fails to control the bias of the obtained estimator because of its overestimation behavior. In this paper, we introduce two adaptive extensions of the recently proposed \(\ell _1\) norm penalized D-trace loss minimization method, which aim to reduce this bias. Extensive numerical results, using both simulated and real datasets, show the advantage of the proposed estimators.
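The paper's exact algorithm is not reproduced on this page; as a minimal illustration of the adaptive idea described above, the sketch below minimizes the D-trace loss \(\tfrac{1}{2}\mathrm{tr}(\Theta^2\hat\Sigma)-\mathrm{tr}(\Theta)\) by proximal gradient descent with entry-wise adaptive Lasso weights \(w_{ij}=1/(|\tilde\Theta_{ij}|^{\gamma}+\epsilon)\) built from an initial ridge-type estimate. The function name, the choice of initial estimate, and the proximal-gradient solver are illustrative assumptions, not the authors' implementation (which builds on Zhang and Zou's alternating-direction method).

```python
import numpy as np

def adaptive_dtrace(S, lam=0.1, gamma=1.0, step=None, n_iter=500, eps=1e-4):
    """Proximal-gradient sketch (hypothetical helper, not the paper's solver).

    Minimizes 0.5*tr(Theta^2 S) - tr(Theta) + lam * sum_ij w_ij |Theta_ij|
    with adaptive weights w_ij = 1 / (|Theta0_ij|^gamma + eps), where Theta0
    is an initial ridge-regularized inverse-covariance estimate.
    """
    p = S.shape[0]
    # Initial estimate: inverse of a ridge-regularized sample covariance.
    Theta0 = np.linalg.inv(S + 0.5 * np.eye(p))
    W = 1.0 / (np.abs(Theta0) ** gamma + eps)   # adaptive penalty weights
    np.fill_diagonal(W, 0.0)                    # leave the diagonal unpenalized
    if step is None:
        # The gradient is Lipschitz with constant lambda_max(S).
        step = 1.0 / max(np.linalg.eigvalsh(S)[-1], 1e-8)
    Theta = np.eye(p)
    for _ in range(n_iter):
        # Gradient of the D-trace loss: 0.5*(S Theta + Theta S) - I.
        grad = 0.5 * (S @ Theta + Theta @ S) - np.eye(p)
        Z = Theta - step * grad
        # Entry-wise soft-thresholding with weighted thresholds: large initial
        # entries get small weights, hence less shrinkage (less bias).
        Theta = np.sign(Z) * np.maximum(np.abs(Z) - step * lam * W, 0.0)
        Theta = 0.5 * (Theta + Theta.T)         # keep the iterate symmetric
    return Theta
```

The weighting step is what distinguishes the adaptive variants from plain \(\ell _1\) penalization: entries that the initial estimate flags as large are shrunk less, which is the bias-reduction mechanism discussed in the abstract.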


Keywords

Adaptive thresholding · D-trace loss · Gaussian graphical model · Gene expression data · High-dimensionality

Mathematics Subject Classification

62H30 (Classification and discrimination; cluster analysis) · 62J10 (Analysis of variance and covariance) · 65S05 (Graphical methods)



We would like to thank the Associate Editor, Coordinating Editor and two anonymous referees for their helpful comments that led to an improvement of this article. We express our gratitude to Teng Zhang and Hui Zou for sharing their Matlab code that solves the \(\ell _1\) norm penalized D-trace loss minimization problem. Andrés M. Alonso gratefully acknowledges financial support from CICYT (Spain) Grants ECO2012-38442 and ECO2015-66593. Francisco J. Nogales and Vahe Avagyan were supported by the Spanish Government through project MTM2013-44902-P. This paper is based on the first author’s dissertation submitted to the Universidad Carlos III de Madrid. At the time of publication, Vahe Avagyan is a Postdoctoral fellow at Ghent University.


  1. Anderson TW (2003) An introduction to multivariate statistical analysis. Wiley-Interscience, New York
  2. Banerjee O, El Ghaoui L, d’Aspremont A (2008) Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J Mach Learn Res 9:485–516
  3. Banerjee S, Ghosal S (2015) Bayesian structure learning in graphical models. J Multivar Anal 136:147–162
  4. Bickel PJ, Levina E (2008) Regularized estimation of large covariance matrices. Ann Stat 36(1):199–227
  5. Cai T, Liu W, Luo X (2011) A constrained \({\ell _1}\) minimization approach to sparse precision matrix estimation. J Am Stat Assoc 106(494):594–607
  6. Cai T, Yuan M (2012) Adaptive covariance matrix estimation through block thresholding. Ann Stat 40(4):2014–2042
  7. Cui Y, Leng C, Sun D (2016) Sparse estimation of high-dimensional correlation matrices. Comput Stat Data Anal 93:390–403
  8. d’Aspremont A, Banerjee O, El Ghaoui L (2008) First-order methods for sparse covariance selection. SIAM J Matrix Anal Appl 30:56–66
  9. Dempster A (1972) Covariance selection. Biometrics 28(1):157–175
  10. Deng X, Tsui K (2013) Penalized covariance matrix estimation using a matrix-logarithm transformation. J Comput Graph Stat 22(2):494–512
  11. Duchi J, Gould S, Koller D (2008) Projected subgradient methods for learning sparse Gaussians. In: Proceedings of the 24th conference on uncertainty in artificial intelligence, pp 153–160. arXiv:1206.3249
  12. El Karoui N (2008) Operator norm consistent estimation of large-dimensional sparse covariance matrices. Ann Stat 36(6):2717–2756
  13. Fan J, Feng J, Wu Y (2009) Network exploration via the adaptive Lasso and SCAD penalties. Ann Appl Stat 3(2):521–541
  14. Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96:1348–1360
  15. Frahm G, Memmel C (2010) Dominating estimator for minimum-variance portfolios. J Econom 159:289–302
  16. Friedman J, Hastie T, Tibshirani R (2008) Sparse inverse covariance estimation with the graphical Lasso. Biostatistics 9(3):432–441
  17. Goto S, Xu Y (2015) Improving mean variance optimization through sparse hedging restrictions. J Finan Quant Anal 50(06):1415–1441
  18. Haff LR (1980) Estimation of the inverse covariance matrix: random mixtures of the inverse Wishart matrix and the identity. Ann Stat 8(3):586–597
  19. Hsieh C-J, Dhillon IS, Ravikumar PK, Sustik MA (2011) Sparse inverse covariance matrix estimation using quadratic approximation. In: Advances in neural information processing systems, vol 24, pp 2330–2338
  20. Huang S, Li J, Sun L, Ye J, Fleisher A, Wu T, Chen K, Reiman E (2010) Learning brain connectivity of Alzheimer’s disease by sparse inverse covariance estimation. NeuroImage 50:935–949
  21. Johnstone IM (2001) On the distribution of the largest eigenvalue in principal component analysis. Ann Stat 29(3):295–327
  22. Jorissen RN, Lipton L, Gibbs P, Chapman M, Desai J, Jones IT, Yeatman TJ, East P, Tomlinson IP, Verspaget HW, Aaltonen LA, Kruhøffer M, Orntoft TF, Andersen CL, Sieber OM (2008) DNA copy-number alterations underlie gene expression differences between microsatellite stable and unstable colorectal cancers. Clin Cancer Res 14(24):8061–8069
  23. Kourtis A, Dotsis G, Markellos N (2012) Parameter uncertainty in portfolio selection: shrinking the inverse covariance matrix. J Bank Finan 36:2522–2531
  24. Kuerer HM, Newman LA, Smith TL, Ames FC, Hunt KK, Dhingra K, Theriault RL, Singh G, Binkley SM, Sneige N, Buchholz TA, Ross MI, McNeese MD, Buzdar AU, Hortobagyi GN, Singletary SE (1999) Clinical course of breast cancer patients with complete pathologic primary tumor and axillary lymph node response to doxorubicin-based neoadjuvant chemotherapy. J Clin Oncol 17(2):460–469
  25. Lam C, Fan J (2009) Sparsistency and rates of convergence in large covariance matrix estimation. Ann Stat 37(6B):4254–4278
  26. Lauritzen S (1996) Graphical models. Clarendon Press, Oxford
  27. Ledoit O, Wolf M (2004) A well-conditioned estimator for large-dimensional covariance matrices. J Multivar Anal 88:365–411
  28. Ledoit O, Wolf M (2012) Nonlinear shrinkage estimation of large-dimensional covariance matrices. Ann Stat 40(2):1024–1060
  29. Mardia KV, Kent JT, Bibby JM (1979) Multivariate analysis. Academic Press, New York
  30. Matthews BW (1975) Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta 405:442–451
  31. Maurya A (2014) A joint convex penalty for inverse covariance matrix estimation. Comput Stat Data Anal 75:15–27
  32. McLachlan S (2004) Discriminant analysis and statistical pattern recognition. Wiley, New Jersey
  33. Meinshausen N (2007) Relaxed Lasso. Comput Stat Data Anal 52(1):374–393
  34. Meinshausen N, Bühlmann P (2006) High-dimensional graphs and variable selection with the Lasso. Ann Stat 34(2):1436–1462
  35. Nguyen TD, Welsch RE (2010) Outlier detection and robust covariance estimation using mathematical programming. Adv Data Anal Classif 4(4):301–334
  36. Ravikumar P, Wainwright M, Raskutti G, Yu B (2011) High-dimensional covariance estimation by minimizing \(\ell _1\)-penalized log-determinant divergence. Electron J Stat 5:935–980
  37. Rothman A, Bickel P, Levina E (2009) Generalized thresholding of large covariance matrices. J Am Stat Assoc 104(485):177–186
  38. Rothman A, Bickel P, Levina E, Zhu J (2008) Sparse permutation invariant covariance estimation. Electron J Stat 2:494–515
  39. Rothman AJ (2012) Positive definite estimators of large covariance matrices. Biometrika 99(2):733–740
  40. Ryali S, Chen T, Supekar K, Menon V (2012) Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty. NeuroImage 59(4):3852–3861
  41. Schäfer J, Strimmer K (2005) A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat Appl Genet Mol Biol 4(1):Article 32
  42. Scheinberg K, Ma S, Goldfarb D (2010) Sparse inverse covariance selection via alternating linearization methods. In: Advances in neural information processing systems, vol 23, pp 2101–2109
  43. Shi L, Reid LH, Jones WD, Shippy R, Warrington JA, Baker SC, Collins PJ, deLongueville F, Kawasaki ES, Lee KY, Luo Y, Sun YA, Willey JC, Setterquist RA, Fischer GM, Tong W, Dragan YP, Dix DJ, Frueh FW, Goodsaid FM, Herman D, Jensen RV, Johnson CD, Lobenhofer EK, Puri RK, Scherf U, Thierry-Mieg J, Wang C, Wilson M, Wolber PK (2010) The microarray quality control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models. Nat Biotechnol 28(8):827–838
  44. Stifanelli PF, Creanza TM, Anglani R, Liuzzi VC, Mukherjee S, Schena FP, Ancona N (2013) A comparative study of covariance selection models for the inference of gene regulatory networks. J Biomed Inf 46:894–904
  45. Tibshirani R (1996) Regression shrinkage and selection via the Lasso. J R Stat Soc Ser B 58(1):267–288
  46. Touloumis A (2015) Nonparametric Stein-type shrinkage covariance matrix estimators in high-dimensional settings. Comput Stat Data Anal 83:251–261
  47. van de Geer S, Bühlmann P, Zhou S (2010) The adaptive and the thresholded Lasso for potentially misspecified models. arXiv preprint arXiv:1001.5176
  48. Wang Y, Daniels MJ (2014) Computationally efficient banding of large covariance matrices for ordered data and connections to banding the inverse Cholesky factor. J Multivar Anal 130:21–26
  49. Warton DI (2008) Penalized normal likelihood and ridge regularization of correlation and covariance matrices. J Am Stat Assoc 103(481):340–349
  50. Whittaker J (1990) Graphical models in applied multivariate statistics. Wiley, Chichester
  51. Witten DM, Friedman JH, Simon N (2011) New insights and faster computations for the graphical Lasso. J Comput Graph Stat 20(4):892–900
  52. Xue L, Ma S, Zou H (2012) Positive-definite \(\ell _1\)-penalized estimation of large covariance matrices. J Am Stat Assoc 107(500):1480–1491
  53. Yin J, Li J (2013) Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell _1\)-penalization. J Multivar Anal 116:365–381
  54. Yuan M (2010) High dimensional inverse covariance matrix estimation via linear programming. J Mach Learn Res 11:2261–2286
  55. Yuan M, Lin Y (2007) Model selection and estimation in the Gaussian graphical model. Biometrika 94(1):19–35
  56. Zerenner T, Friederichs P, Lehnertz K, Hense A (2014) A Gaussian graphical model approach to climate networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 24(2):023103
  57. Zhang C-H, Huang J (2008) The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann Stat 36(4):1567–1594
  58. Zhang T, Zou H (2014) Sparse precision matrix estimation via Lasso penalized D-trace loss. Biometrika 101(1):103–120
  59. Zou H (2006) The adaptive Lasso and its oracle properties. J Am Stat Assoc 101(476):1418–1429

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  • Vahe Avagyan (1, 2), corresponding author
  • Andrés M. Alonso (2)
  • Francisco J. Nogales (3)
  1. Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
  2. Department of Statistics, Universidad Carlos III de Madrid, Getafe, Spain
  3. Department of Statistics, Universidad Carlos III de Madrid, Leganés, Spain