Abstract
In this paper we study optimization problems involving eigenvalues of symmetric matrices. One of the difficulties in the numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not differentiable at points where they coalesce. Here we apply the \(\mathcal{U}\)-Lagrangian theory to a class of D.C. functions (differences of two convex functions): an arbitrary eigenvalue function \(\lambda_i\) composed with an affine matrix-valued mapping, which is a D.C. function. We give the first- and second-order derivatives of the \(\mathcal{U}\)-Lagrangian in the space of decision variables \(\mathbb{R}^m\) when the transversality condition holds. Moreover, an algorithmic framework with quadratic convergence is presented. Finally, we present an application, low-rank matrix optimization, and list its \(\mathcal{VU}\)-decomposition results.
Acknowledgements
We would like to thank Xi-jun Liang and Yue Lu from Dalian University of Technology and Yuan Lu from Shenyang University for numerous fruitful discussions. The authors also thank two anonymous referees for a number of valuable and constructive suggestions that helped to improve the presentation. This research was supported in part by the Natural Science Foundation of China (Grants 11171049, 11226230 and 11301347) and by the General Project of the Education Department of Liaoning Province (L2012427).
Appendix
Next we solve problem (4.5) of Proposition 4.1.
Denote \(W_{i} = Q^{T}_{1}(A(\hat{x}))\, A_{i}\, Q_{1}(A(\hat{x}))\), \(i=1,\ldots,n\). The above formula can then be rewritten so that the objective function equals \(\sum^{n}_{i=1} \langle Z, W_{i}\rangle^{2}\). Hence the problem becomes
By standard results from matrix analysis, \(S^{n}\) is isomorphic to \(\mathbb{R}^{\frac{n(n+1)}{2}}\) via the map \(\operatorname{svec}(A)\), defined by stacking the columns of the lower triangle of \(A\) on top of each other and multiplying the off-diagonal elements by \(\sqrt{2}\). The factor \(\sqrt{2}\) on the off-diagonal elements ensures that, for \(A, B \in S^{n}\), \(\langle A, B\rangle = \operatorname{svec}(A)^{T}\operatorname{svec}(B)\).
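As an illustration, the \(\operatorname{svec}\) map and its inner-product property can be sketched in Python with NumPy (the helper name `svec` is ours, matching the notation above):

```python
import numpy as np

def svec(A):
    # Stack the columns of the lower triangle of A on top of each
    # other, scaling off-diagonal entries by sqrt(2).
    n = A.shape[0]
    out = []
    for j in range(n):
        for i in range(j, n):
            out.append(A[i, j] if i == j else np.sqrt(2.0) * A[i, j])
    return np.array(out)

# Check the inner-product identity <A, B> = svec(A)^T svec(B)
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = M + M.T   # random symmetric A
M = rng.standard_normal((4, 4)); B = M + M.T   # random symmetric B
assert np.isclose(np.trace(A @ B), svec(A) @ svec(B))
```

The scaling makes the check pass because the off-diagonal entries of a symmetric matrix each appear twice in \(\langle A, B\rangle = \operatorname{tr}(AB)\) but only once in the stacked vector.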
So we have \(\langle Z, W_{i}\rangle = \operatorname{svec}(Z)^{T}\operatorname{svec}(W_{i})\). Set \(z=\operatorname{svec}(Z)\), \(w_{i}=\operatorname{svec}(W_{i})\), \(e=\operatorname{svec}(I_{q-p})\). Thus the above problem can be equivalently transformed into
This is a quadratic program in \(z\), which can be solved through its KKT conditions. Its Lagrangian function is
where \(W:= \sum^{n}_{i=1} w_{i} w^{T}_{i}\). The KKT pair \((z^{*},\mu^{*})\) of this convex quadratic program satisfies the following condition
i.e.,
In fact, since \(z\) depends on the variable \(\hat{x}\) and, by the preceding proof, the solution \(Z\) is unique for each \(\hat{x}\), the solution \(z\) is also unique. Hence the coefficient matrix in the above expression is invertible, and the model can be solved by the standard method for equality-constrained quadratic programming.
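To make this last step concrete, here is a minimal sketch of solving an equality-constrained convex quadratic program by its bordered KKT linear system. The function name `solve_eq_qp` is ours, and the constraint form \(e^{T}z = b\) with \(b=1\) is an assumption standing in for the elided display above:

```python
import numpy as np

def solve_eq_qp(W, e, b=1.0):
    # KKT system for  min z^T W z  s.t.  e^T z = b:
    #   [ 2W   e ] [ z  ]   [ 0 ]
    #   [ e^T  0 ] [ mu ] = [ b ]
    # Assumes the bordered matrix is invertible, as is the case
    # when the minimizer is unique.
    m = len(e)
    K = np.zeros((m + 1, m + 1))
    K[:m, :m] = 2.0 * W          # gradient of z^T W z is 2 W z
    K[:m, m] = e
    K[m, :m] = e
    rhs = np.zeros(m + 1)
    rhs[m] = b
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m]       # (z*, mu*)

# Build W = sum_i w_i w_i^T from random vectors w_i and solve
rng = np.random.default_rng(0)
ws = rng.standard_normal((8, 5))             # eight vectors w_i in R^5
W = ws.T @ ws                                # sum of rank-one terms
e = np.ones(5)
z, mu = solve_eq_qp(W, e, b=1.0)
assert np.isclose(e @ z, 1.0)                # feasibility
assert np.allclose(2.0 * W @ z + mu * e, 0)  # stationarity
```

Solving the full bordered system at once, rather than eliminating the constraint first, mirrors the KKT condition displayed above.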
Cite this article
Huang, M., Pang, L.P., Xia, Z.Q.: The space decomposition theory for a class of eigenvalue optimizations. Comput. Optim. Appl. 58, 423–454 (2014). https://doi.org/10.1007/s10589-013-9624-x