The space decomposition theory for a class of eigenvalue optimizations

Computational Optimization and Applications

Abstract

In this paper we study optimization problems involving eigenvalues of symmetric matrices. One of the difficulties in the numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not differentiable at points where they coalesce. Here we apply the \(\mathcal{U}\)-Lagrangian theory to a class of D.C. functions (differences of two convex functions): the arbitrary eigenvalue function \(\lambda_{i}\) composed with an affine matrix-valued mapping, which is a D.C. function. We give the first- and second-order derivatives of the \(\mathcal{U}\)-Lagrangian in the space of decision variables \(\mathbb{R}^{m}\) when the transversality condition holds. Moreover, an algorithmic framework with quadratic convergence is presented. Finally, we present an application, low-rank matrix optimization, and list its \(\mathcal{VU}\)-decomposition results.


Acknowledgements

We would like to thank Xi-jun Liang and Yue Lu from Dalian University of Technology and Yuan Lu from Shenyang University for numerous fruitful discussions. The authors also thank two anonymous referees for a number of valuable and constructive suggestions that helped to improve the presentation. This research was supported in part by the Natural Science Foundation of China, Grants 11171049, 11226230 and 11301347, and by General Project of the Education Department of Liaoning Province L2012427.

Author information

Correspondence to Li-Ping Pang.

Appendix

Next we solve problem (4.5) of Proposition 4.1.

$$\begin{aligned} \mathcal{A}^{\ast} \bigl( Q_1 \bigl(A(\hat{x}) \bigr) Z Q^{T}_1 \bigl(A(\hat{x})\bigr) \bigr) = & \begin{bmatrix} \langle Q_1 (A(\hat{x})) Z Q^{T}_1 (A(\hat{x})), A_1\rangle\\ \vdots\\ \langle Q_1 (A(\hat{x})) Z Q^{T}_1 (A(\hat{x})), A_n\rangle\\ \end{bmatrix} \\ =& \begin{bmatrix} \langle Z, Q^{T}_1 (A(\hat{x})) A_1 Q_1 (A(\hat{x})) \rangle\\ \vdots\\ \langle Z, Q^{T}_1 (A(\hat{x})) A_n Q_1 (A(\hat{x})) \rangle\\ \end{bmatrix} \end{aligned}$$

We denote \(W_{i} = Q^{T}_{1} (A(\hat{x})) A_{i} Q_{1} (A(\hat{x}))\), \(i=1,\ldots,n\); then the above formula becomes

$$ \mathcal{A}^{\ast} \bigl( Q_1 \bigl(A(\hat{x})\bigr) Z Q^{T}_1 \bigl(A(\hat{x})\bigr) \bigr) = \begin{bmatrix} \langle Z, W_1\rangle\\ \vdots\\ \langle Z, W_n\rangle\\ \end{bmatrix} . $$
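The trace identity behind this computation, \(\langle Q_1 Z Q_1^T, A_i\rangle = \langle Z, Q_1^T A_i Q_1\rangle\), can be checked numerically. A minimal sketch with random illustrative data (the dimensions and matrices below are made up for the demonstration, not taken from the problem):

```python
import numpy as np

# Illustrative check of <Q1 Z Q1^T, A_i> = <Z, Q1^T A_i Q1> with random data;
# q, r, n and all matrices here are made up for the demo.
rng = np.random.default_rng(2)
q, r, n = 6, 3, 4
Q1, _ = np.linalg.qr(rng.standard_normal((q, r)))      # orthonormal columns
Z = rng.standard_normal((r, r)); Z = (Z + Z.T) / 2     # symmetric Z
As = []
for _ in range(n):
    M = rng.standard_normal((q, q))
    As.append((M + M.T) / 2)                           # symmetric A_i

lhs = np.array([np.trace(Q1 @ Z @ Q1.T @ Ai) for Ai in As])
Ws = [Q1.T @ Ai @ Q1 for Ai in As]                     # W_i = Q1^T A_i Q1
rhs = np.array([np.trace(Z @ Wi) for Wi in Ws])
assert np.allclose(lhs, rhs)
```

The equality is just cyclic invariance of the trace, which is why the adjoint map reduces to inner products with the smaller matrices \(W_i\).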

The objective function is therefore \(\sum^{n}_{i=1} \langle Z, W_{i}\rangle^{2}\), and the problem becomes

$$\begin{aligned} \min &\quad \sum^n_{i=1} { \langle Z, W_i\rangle}^2 \\ \textrm{s.t.} &\quad \langle Z, I_{q-p}\rangle=1. \end{aligned}$$

By standard matrix analysis, the space \(S^{n}\) of symmetric \(n\times n\) matrices is isomorphic to \(\mathbb{R}^{\frac{n(n+1)}{2}}\) via the map \(\textrm{svec}(A)\), defined by stacking the columns of the lower triangle of A on top of each other and multiplying the off-diagonal elements by \(\sqrt{2}\),

$$ \textrm{svec}(A):=[a_{11},\sqrt{2}a_{21},\ldots,\sqrt {2}a_{n1}, a_{22},\sqrt{2}a_{32}, \ldots,a_{nn}]^T. $$

The factor \(\sqrt{2}\) for off-diagonal elements ensures that, for \(A, B \in S^{n}\),

$$ \langle A, B\rangle= \textrm{tr}(AB) =\textrm{svec}(A)^T \textrm{svec}(B). $$
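A minimal sketch of \(\textrm{svec}\) in Python, following the definition above (the helper name and numpy usage are illustrative, not from the paper):

```python
import numpy as np

def svec(A):
    """Stack the lower-triangular columns of symmetric A, scaling the
    off-diagonal entries by sqrt(2), as in the definition above."""
    n = A.shape[0]
    parts = []
    for j in range(n):
        col = A[j:, j].astype(float).copy()
        col[1:] *= np.sqrt(2.0)        # off-diagonal entries get sqrt(2)
        parts.append(col)
    return np.concatenate(parts)

# verify <A, B> = tr(AB) = svec(A)^T svec(B) on random symmetric matrices
rng = np.random.default_rng(0)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = (M + M.T) / 2, (N + N.T) / 2
assert np.isclose(np.trace(A @ B), svec(A) @ svec(B))
```

The \(\sqrt{2}\) scaling is exactly what makes the Euclidean inner product of the vectorizations reproduce the trace inner product, since each off-diagonal pair \(a_{ij}b_{ij}\) appears twice in \(\textrm{tr}(AB)\).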

So we have \(\langle Z, W_{i}\rangle = \textrm{svec}(Z)^{T}\,\textrm{svec}(W_{i})\). Set \(z=\textrm{svec}(Z)\), \(w_{i}=\textrm{svec}(W_{i})\), \(e=\textrm{svec}(I_{q-p})\). Thus the above problem can be equivalently transformed into

$$\begin{aligned} \min &\quad z^T \Biggl[\sum ^n_{i=1} w_i w^{T}_i \Biggr] z \\ \textrm{s.t.} &\quad e^T z =1. \end{aligned}$$

This is a convex quadratic program in z, which can be solved via its KKT conditions. Its Lagrangian is

$$ L(z,\mu)= z^T W z + \mu\bigl(e^T z - 1\bigr), $$

where \(W:= \sum^{n}_{i=1} w_{i} w^{T}_{i}\). The KKT pair \((z^{\ast},\mu^{\ast})\) of this convex quadratic program must satisfy

$$ 2 W z^{\ast} +\mu^{\ast} e = 0, \quad e^T z^{\ast} =1, $$

i.e.,

$$ \begin{pmatrix} 2W & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} z^{\ast} \\ \mu^{\ast} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} . $$

In fact, since z depends on the variable \(\hat{x}\) and, by the result of the previous proof, the solution Z is unique for each \(\hat{x}\), the solution z is also unique for each \(\hat{x}\). Hence the coefficient matrix in the above expression is invertible, and our problem can be solved by standard methods for equality-constrained quadratic programming.
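When the coefficient matrix is invertible, the whole computation is a single linear solve. A minimal sketch with made-up data: a random positive-definite W and e equal to the all-ones vector stand in for \(\sum_{i} w_{i} w^{T}_{i}\) and \(\textrm{svec}(I_{q-p})\); the factor 2 comes from differentiating \(z^{T} W z\) and can equivalently be absorbed into \(\mu\):

```python
import numpy as np

# Solve min z^T W z  s.t.  e^T z = 1 via its KKT linear system.
# W and e are illustrative stand-ins for sum_i w_i w_i^T and svec(I_{q-p}).
rng = np.random.default_rng(1)
m = 5
B = rng.standard_normal((m, m))
W = B @ B.T + np.eye(m)                    # symmetric positive definite
e = np.ones(m)

# KKT system: [2W  e; e^T  0] [z; mu] = [0; 1]
K = np.block([[2 * W, e[:, None]],
              [e[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(m), [1.0]]))
z, mu = sol[:m], sol[m]

assert np.isclose(e @ z, 1.0)              # feasibility
assert np.allclose(2 * W @ z + mu * e, 0)  # stationarity
```

In the actual setting W may be only positive semidefinite; uniqueness of z then rests on the argument above, and a Schur-complement or null-space method for equality-constrained quadratic programming applies equally well.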

Cite this article

Huang, M., Pang, LP. & Xia, ZQ. The space decomposition theory for a class of eigenvalue optimizations. Comput Optim Appl 58, 423–454 (2014). https://doi.org/10.1007/s10589-013-9624-x

