Abstract
We propose a proximal augmented Lagrangian method and a hybrid method for solving large-scale nonlinear semidefinite programming problems whose objective function is the sum of a convex quadratic function and a log-determinant term. The hybrid method employs the proximal augmented Lagrangian method to generate a good initial point and then applies the Newton-CG augmented Lagrangian method to obtain a highly accurate solution. We demonstrate that these algorithms can efficiently supply high-quality solutions, even for some ill-conditioned problems.
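To make the problem class concrete, the following is a minimal numerical sketch (not the paper's algorithm) of the objective in the special case \(\mathcal{Q}\equiv 0\) with no linear constraints, where the minimizer has the closed form \(X^\star=\mu C^{-1}\) because the gradient is \(C-\mu X^{-1}\). All names (`logdet_obj`, `C`, `mu`) are illustrative assumptions, not notation taken from the paper's implementation.

```python
import numpy as np

def logdet_obj(X, C, mu):
    """f(X) = <C, X> - mu * log det X  (the Q = 0 case of the model)."""
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return np.sum(C * X) - mu * logdet

# A random positive definite C; the unconstrained minimizer is
# X* = mu * C^{-1}, since grad f(X) = C - mu * X^{-1} vanishes there.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A @ A.T + 5 * np.eye(5)
mu = 1.0

X_star = mu * np.linalg.inv(C)
grad = C - mu * np.linalg.inv(X_star)  # ~0 up to rounding
print(logdet_obj(X_star, C, mu), np.max(np.abs(grad)))
```

Adding a convex quadratic term \(\tfrac{1}{2}\langle X,\mathcal{Q}X\rangle\) and linear constraints removes this closed form, which is where augmented Lagrangian schemes of the kind studied in the paper come in.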
Notes
In this paper we focus only on large-scale problems. However, we did not find large-scale real data sets for general problems with \(\mathcal{Q}\ne 0\), so we test real data only for problems with \(\mathcal{Q}\equiv 0\).
Acknowledgments
I sincerely thank the Institute for Mathematical Sciences, National University of Singapore, for supporting my visit to attend the workshop “Optimization: Computation, Theory and Modeling” in 2012, which gave me the opportunity for fruitful discussions with Professors Defeng Sun and Kim-Chuan Toh. I thank Dr. Xinyuan Zhao of Beijing University of Technology for many discussions on this topic. I also thank the two anonymous referees and the editor for their helpful comments and suggestions, which improved the quality of this paper. The author’s research was supported by the National Natural Science Foundation of China under Grant 11201382, the Youth Fund of Humanities and Social Sciences of the Ministry of Education under Grant 12YJC910008, the project of the Science and Technology Department of Sichuan Province under Grant 2012ZR0154, and the Fundamental Research Funds for the Central Universities under Grants SWJTU12CX055 and SWJTU12ZT15.
Cite this article
Wang, C. On how to solve large-scale log-determinant optimization problems. Comput Optim Appl 64, 489–511 (2016). https://doi.org/10.1007/s10589-015-9812-y