Spectral operators of matrices

Abstract

The class of matrix optimization problems (MOPs) has been recognized in recent years as a powerful tool for modeling many important applications involving structured low-rank matrices, both within and beyond the optimization community. This trend can be credited to some extent to exciting developments in emerging fields such as compressed sensing. The Löwner operator, which generates a matrix-valued function by applying a single-variable function to each eigenvalue of a symmetric matrix, has long played an important role in solving matrix optimization problems. However, the classical theory developed for the Löwner operator has become inadequate for these recent applications. The main objective of this paper is to provide the necessary theoretical foundations from the perspective of designing efficient numerical methods for solving MOPs. We achieve this goal by introducing and conducting a thorough study of a new class of matrix-valued functions, coined spectral operators of matrices. Several fundamental properties of spectral operators, including well-definedness, continuity, directional differentiability, and Fréchet differentiability, are studied systematically.
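
To make the notion concrete, the following minimal sketch (an illustration added here, not taken from the paper; the function name singular_value_soft_threshold and the NumPy-based implementation are assumptions) computes one of the simplest spectral operators arising in low-rank optimization: the proximal mapping of the nuclear norm, obtained by applying the scalar soft-thresholding map to each singular value of a matrix while keeping its singular vectors fixed.

    import numpy as np

    def singular_value_soft_threshold(X, tau):
        # Apply the scalar map t -> max(t - tau, 0) to every singular value of X,
        # keeping the singular vectors unchanged (the prox of tau times the nuclear norm).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 4))
    Y = singular_value_soft_threshold(X, tau=1.0)
    print(np.linalg.matrix_rank(Y) <= np.linalg.matrix_rank(X))  # True: thresholding cannot increase the rank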

Notes

  1. Note that Definition 1 is different from the property \((\mathcal{E})\) used in [29, Definition 2.2] for the special Hermitian/symmetric case, i.e., \(\mathcal{X}={\mathbb S}^{m_1}\). The conditions used in [29, Definitions 2.1 and 2.2] do not appear to be appropriate for studying spectral operators. For instance, consider the function \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) defined by \(f(x)=x^{\downarrow }\) for \(x\in {\mathbb {R}}^2\), where \(x^{\downarrow }\) denotes the vector whose entries are those of x arranged in non-increasing order, i.e., \(x^{\downarrow }_1\ge x^{\downarrow }_2\). Clearly, f satisfies [29, Definitions 2.1 and 2.2], yet f is not differentiable at any x with \(x_1=x_2\). However, the corresponding matrix function \(F(X)=X\) is differentiable on \({\mathbb S}^2\), which implies that [29, Corollary 4.2] is incorrect. A small numerical illustration is sketched below.
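
The following minimal sketch (an illustration added here, not taken from the paper; the NumPy-based code and the helper name f are assumptions) checks the remark numerically: the two one-sided difference quotients of \(f(x)=x^{\downarrow }\) along a fixed direction disagree at a point with \(x_1=x_2\), so f cannot be differentiable there, while the induced matrix function on \({\mathbb S}^2\) merely rebuilds X and is therefore the (smooth) identity map.

    import numpy as np

    def f(x):
        # f(x) = the entries of x arranged in non-increasing order
        return np.sort(x)[::-1]

    x = np.array([1.0, 1.0])   # a point with x_1 = x_2
    d = np.array([0.0, 1.0])   # a test direction
    t = 1e-6
    forward = (f(x + t * d) - f(x)) / t     # forward difference quotient along d:  approx (1, 0)
    backward = (f(x) - f(x - t * d)) / t    # backward difference quotient along d: approx (0, 1)
    print(forward, backward)   # if f were differentiable at x, both would approximate Jf(x) d and agree

    # The induced matrix function F(X) = U diag(f(lambda(X))) U^T only re-sorts the
    # eigenvalues, so it rebuilds X itself: F is the identity map on S^2, hence smooth.
    A = np.array([[1.0, 0.3], [0.3, 1.0]])
    lam, U = np.linalg.eigh(A)              # eigh returns eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]          # reorder to the non-increasing convention
    print(np.allclose(U @ np.diag(f(lam)) @ U.T, A))   # True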

References

  1. Bhatia, R.: Matrix Analysis. Springer, New York (1997)

  2. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2008)

  3. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56, 2053–2080 (2009)

  4. Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58, 11 (2011)

  5. Chan, Z.X., Sun, D.F.: Constraint nondegeneracy, strong regularity, and nonsingularity in semidefinite programming. SIAM J. Optim. 19, 370–396 (2008)

  6. Chandrasekaran, V., Sanghavi, S., Parrilo, P.A., Willsky, A.: Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21, 572–596 (2011)

  7. Chen, C.H., Liu, Y.J., Sun, D.F., Toh, K.C.: A semismooth Newton-CG dual proximal point algorithm for matrix spectral norm approximation problems. Math. Program. 155, 435–470 (2016)

  8. Chen, X., Qi, H.D., Tseng, P.: Analysis of nonsmooth symmetric-matrix-valued functions with applications to semidefinite complementarity problems. SIAM J. Optim. 13, 960–985 (2003)

  9. Chu, M., Funderlic, R., Plemmons, R.: Structured low rank approximation. Linear Algebra Appl. 366, 157–172 (2003)

  10. Demyanov, V.F., Rubinov, A.M.: On quasidifferentiable mappings. Optimization 14, 3–21 (1983)

  11. Ding, C.: An introduction to a class of matrix optimization problems. PhD thesis, National University of Singapore. http://www.math.nus.edu.sg/~matsundf/DingChao_Thesis_final.pdf (2012)

  12. Ding, C., Sun, D.F., Toh, K.C.: An introduction to a class of matrix cone programming. Math. Program. 144, 141–179 (2014)

  13. Ding, C., Sun, D.F., Ye, J.J.: First order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. 147, 539–579 (2014)

  14. Dobrynin, V.: On the rank of a matrix associated with a graph. Discrete Math. 276, 169–175 (2004)

  15. Flett, T.M.: Differential Analysis. Cambridge University Press, Cambridge (1980)

  16. Greenbaum, A., Trefethen, L.N.: GMRES/CR and Arnoldi/Lanczos as matrix approximation problems. SIAM J. Sci. Comput. 15, 359–368 (1994)

  17. Kotlov, A., Lovász, L., Vempala, S.: The Colin de Verdière number and sphere representations of a graph. Combinatorica 17, 483–521 (1997)

  18. Lewis, A.S.: The convex analysis of unitarily invariant matrix functions. J. Convex Anal. 2, 173–183 (1995)

  19. Lewis, A.S.: Derivatives of spectral functions. Math. Oper. Res. 21, 576–588 (1996)

  20. Lewis, A.S., Overton, M.L.: Eigenvalue optimization. Acta Numer. 5, 149–190 (1996)

  21. Lewis, A.S., Sendov, H.S.: Twice differentiable spectral functions. SIAM J. Matrix Anal. Appl. 23, 368–386 (2001)

  22. Lewis, A.S., Sendov, H.S.: Nonsmooth analysis of singular values. Part I: theory. Set-Valued Anal. 13, 213–241 (2005)

  23. Lewis, A.S., Sendov, H.S.: Nonsmooth analysis of singular values. Part II: application. Set-Valued Anal. 13, 243–264 (2005)

  24. Liu, Y.J., Sun, D.F., Toh, K.C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. 133, 399–436 (2012)

  25. Löwner, K.: Über monotone Matrixfunktionen. Math. Z. 38, 177–216 (1934)

  26. Lovász, L.: On the Shannon capacity of a graph. IEEE Trans. Inf. Theory 25, 1–7 (1979)

  27. Miao, W.M., Sun, D.F., Pan, S.H.: A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program. 159, 289–338 (2016)

  28. Mifflin, R.: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 15, 959–972 (1977)

  29. Mohebi, H., Salemi, A.: Analysis of symmetric matrix valued functions. Numer. Funct. Anal. Optim. 28, 691–715 (2007)

  30. Mordukhovich, B.S., Nghia, T.T.A., Rockafellar, R.T.: Full stability in finite-dimensional optimization. Math. Oper. Res. 40, 226–252 (2015)

  31. Moreau, J.-J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 1067–1070 (1965)

  32. Nashed, M.Z.: Differentiability and related properties of nonlinear operators: some aspects of the role of differentials in nonlinear functional analysis. In: Rall, L.B. (ed.) Nonlinear Functional Analysis and Applications, pp. 103–309. Academic Press, New York (1971)

  33. Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. SIAM, Philadelphia (1970)

  34. Qi, H.D., Yang, X.Q.: Semismoothness of spectral functions. SIAM J. Matrix Anal. Appl. 25, 766–783 (2003)

  35. Qi, L., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993)

  36. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)

  37. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  38. Sun, D.F.: The strong second order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Oper. Res. 31, 761–776 (2006)

  39. Sun, D.F., Sun, J.: Semismooth matrix-valued functions. Math. Oper. Res. 27, 150–169 (2002)

  40. Sun, D.F., Sun, J.: Löwner’s operator and spectral functions in Euclidean Jordan algebras. Math. Oper. Res. 33, 421–445 (2008)

  41. Todd, M.J.: Semidefinite optimization. Acta Numer. 10, 515–560 (2001)

  42. Toh, K.C.: GMRES vs. ideal GMRES. SIAM J. Matrix Anal. Appl. 18, 30–36 (1997)

  43. Toh, K.C., Trefethen, L.N.: The Chebyshev polynomials of a matrix. SIAM J. Matrix Anal. Appl. 20, 400–419 (1998)

  44. Wright, J., Ma, Y., Ganesh, A., Rao, S.: Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization. In: Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C. (eds.) Advances in Neural Information Processing Systems 22 (2009)

  45. Wu, B., Ding, C., Sun, D.F., Toh, K.C.: On the Moreau–Yosida regularization of the vector k-norm related functions. SIAM J. Optim. 24, 766–794 (2014)

  46. Yang, L.Q., Sun, D.F., Toh, K.C.: SDPNAL\(+\): a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Comput. 7, 331–366 (2015)

  47. Yang, Z.: A study on nonsymmetric matrix-valued functions. Master’s Thesis, National University of Singapore. http://www.math.nus.edu.sg/~matsundf/Main_YZ.pdf (2009)

  48. Zhao, X.Y., Sun, D.F., Toh, K.C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)

Acknowledgements

We would like to thank the referees as well as the editors for their constructive comments that have helped to improve the quality of the paper. The research of C. Ding was supported by the National Natural Science Foundation of China under projects No. 11301515, No. 11671387 and No. 11531014.

Author information

Correspondence to Defeng Sun.

About this article

Cite this article

Ding, C., Sun, D., Sun, J. et al. Spectral operators of matrices. Math. Program. 168, 509–531 (2018). https://doi.org/10.1007/s10107-017-1162-3

Keywords

  • Spectral operators
  • Directional differentiability
  • Fréchet differentiability
  • Matrix valued functions
  • Proximal mappings

Mathematics Subject Classification

  • 90C25
  • 90C06
  • 65K05
  • 49J50
  • 49J52