
On how to solve large-scale log-determinant optimization problems

Published in: Computational Optimization and Applications

Abstract

We propose a proximal augmented Lagrangian method and a hybrid method for solving large-scale nonlinear semidefinite programming problems whose objective function is the sum of a convex quadratic function and a log-determinant term. The hybrid method first employs the proximal augmented Lagrangian method to generate a good initial point and then applies the Newton-CG augmented Lagrangian method to obtain a highly accurate solution. We demonstrate that both algorithms can deliver high-quality solutions efficiently, even for some ill-conditioned problems.
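To make the problem class concrete, the following is a minimal illustrative sketch, not the paper's algorithm (which relies on proximal augmented Lagrangian and Newton-CG machinery). It assumes the quadratic part \(\mathcal{Q}\) is zero and there are no linear constraints, in which case minimizing \(\langle S, X\rangle - \log\det X\) over positive-definite \(X\) has the closed-form solution \(X^{*} = S^{-1}\), so a simple gradient method can be checked against it. All names and step-size choices below are illustrative assumptions.

```python
import numpy as np

def logdet_objective(X, S):
    # f(X) = <S, X> - log det X : the log-determinant objective with the
    # quadratic part set to zero (an illustrative special case).
    sign, logdet = np.linalg.slogdet(X)
    return np.trace(S @ X) - logdet

def solve_logdet(S, steps=4000, lr=0.002):
    # Plain gradient descent on f; grad f(X) = S - X^{-1}.
    # The unique positive-definite minimizer is X* = S^{-1}.
    n = S.shape[0]
    X = np.eye(n)
    for _ in range(steps):
        G = S - np.linalg.inv(X)
        t = lr
        # halve the step until the iterate stays positive definite
        while np.linalg.eigvalsh(X - t * G)[0] <= 0:
            t *= 0.5
        X = X - t * G
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4 * np.eye(4)      # well-conditioned SPD data matrix
X = solve_logdet(S)
# prints the max deviation from the closed-form solution inv(S)
print(np.max(np.abs(X - np.linalg.inv(S))))
```

The positive-definiteness safeguard matters because the gradient \(X^{-1}\) blows up near the boundary of the cone; the paper's methods handle this, along with constraints and the quadratic term, far more robustly than this sketch does.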


Notes

  1. In this paper we focus only on large-scale problems; however, we did not find large-scale real data for general problems with \(\mathcal{Q}\ne 0\), so we compute real-data problems only with \(\mathcal{Q}\equiv 0\).


Acknowledgments

I sincerely thank the Institute for Mathematical Sciences, National University of Singapore, for supporting my visit to attend the workshop “Optimization: Computation, Theory and Modeling” in 2012, which gave me the opportunity for fruitful discussions with Professors Defeng Sun and Kim-Chuan Toh. I thank Dr. Xinyuan Zhao at Beijing University of Technology for many discussions on this topic. I also thank the two anonymous referees and the editor for their helpful comments and suggestions, which improved the quality of this paper. The author’s research was supported by the National Natural Science Foundation of China under Grant 11201382, the Youth Fund of Humanities and Social Sciences of the Ministry of Education under Grant 12YJC910008, the project of the Science and Technology Department of Sichuan Province under Grant 2012ZR0154, and the Fundamental Research Funds for the Central Universities under Grants SWJTU12CX055 and SWJTU12ZT15.

Author information


Correspondence to Chengjing Wang.


Cite this article

Wang, C. On how to solve large-scale log-determinant optimization problems. Comput Optim Appl 64, 489–511 (2016). https://doi.org/10.1007/s10589-015-9812-y
