Mathematical Programming

Volume 144, Issue 1–2, pp 181–226

A unified approach for minimizing composite norms

Full Length Paper · Series A


We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem
$$\begin{aligned} \begin{array}{ll} \min\limits_{X\in \mathbb{R}^{m\times n}} &amp; \mu_1\Vert \sigma(\mathcal{F}(X)-G)\Vert_\alpha + \mu_2\Vert \mathcal{C}(X)-d\Vert_\beta,\\ \text{subject to} &amp; \mathcal{A}(X)-b\in \mathcal{Q}, \end{array} \end{aligned}$$
where \(\sigma(X)\) denotes the vector of singular values of \(X \in \mathbb{R}^{m\times n}\); the matrix norm \(\Vert \sigma(X)\Vert_{\alpha}\) denotes either the Frobenius, the nuclear, or the \(\ell_2\)-operator norm of \(X\); the vector norm \(\Vert \cdot \Vert_{\beta}\) denotes either the \(\ell_1\)-norm, the \(\ell_2\)-norm, or the \(\ell_{\infty}\)-norm; \(\mathcal{Q}\) is a closed convex set; and \(\mathcal{A}(\cdot)\), \(\mathcal{C}(\cdot)\), \(\mathcal{F}(\cdot)\) are linear operators from \(\mathbb{R}^{m\times n}\) to vector spaces of appropriate dimensions. Basis pursuit, matrix completion, robust principal component pursuit (PCP), and stable PCP problems are all special cases of the composite norm minimization problem; thus, FALC is able to solve all of these problems in a unified manner. We show that any limit point of the FALC iterate sequence is an optimal solution of the composite norm minimization problem. We also show that, for all \(\epsilon >0\), the FALC iterates are \(\epsilon\)-feasible and \(\epsilon\)-optimal after \(\mathcal{O}(\log(\epsilon^{-1}))\) iterations, which require \(\mathcal{O}(\epsilon^{-1})\) constrained shrinkage operations and Euclidean projections onto the set \(\mathcal{Q}\). Surprisingly, on the problem sets we tested, FALC required only \(\mathcal{O}(\log(\epsilon^{-1}))\) constrained shrinkage operations, instead of the \(\mathcal{O}(\epsilon^{-1})\) worst-case bound, to compute an \(\epsilon\)-feasible and \(\epsilon\)-optimal solution. To the best of our knowledge, FALC is the first algorithm with a known complexity bound that solves the stable PCP problem.
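To illustrate the shrinkage operations counted in the complexity bounds above, here is a minimal NumPy sketch of the two unconstrained proximal (soft-thresholding) maps underlying the matrix and vector norm terms. This is an assumption-laden illustration, not the paper's constrained shrinkage subproblem, which additionally accounts for the linear maps and the conic constraint.

```python
import numpy as np

def singular_value_shrinkage(X, tau):
    """Proximal map of tau * (nuclear norm): soft-threshold the
    singular values of X by tau and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

def vector_shrinkage(x, tau):
    """Proximal map of tau * (l1-norm): elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

Each FALC-style iteration applies an operation of this kind (in constrained form), so the \(\mathcal{O}(\epsilon^{-1})\) bound counts SVD-based shrinkage steps; for large matrices a partial SVD is typically substituted for the full decomposition.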


Keywords: Norm minimization · Convex optimization · Conic constraints · Augmented Lagrangian method · First-order method · Iteration complexity · \(\ell_1\)-Minimization · Nuclear norm · Basis pursuit · Principal component pursuit · Sparse optimization

Mathematics Subject Classification (2000)

90C25 · 90C06 · 90C22 · 49M29 · 90C90 · 65K05



Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2013

Authors and Affiliations

  1. IE Department, The Pennsylvania State University, University Park, USA
  2. IEOR Department, Columbia University, New York, USA
