
Journal of Scientific Computing, Volume 54, Issue 2–3, pp. 428–453

Accelerated Linearized Bregman Method

  • Bo Huang
  • Shiqian Ma
  • Donald Goldfarb

Abstract

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm, first proposed by Stanley Osher and his collaborators, is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires \(O(1/\epsilon)\) iterations to obtain an \(\epsilon\)-optimal solution, and that the ALB algorithm reduces this iteration complexity to \(O(1/\sqrt{\epsilon})\) while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems are presented that demonstrate that the ALB method can be significantly faster than the LB method.
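
To make the abstract's dual-gradient view concrete: for basis pursuit, \(\min \|x\|_1\) s.t. \(Ax=b\), the LB iteration can be written as gradient ascent on the smooth dual of a \(\mu\)-regularized problem, and the ALB method adds Nesterov-style extrapolation to that ascent. The following is a minimal sketch of this structure under those assumptions, not the authors' implementation; the function names, step-size choice, regularization value, and iteration counts are illustrative.

```python
import numpy as np

def shrink(v, mu):
    # Elementwise soft-thresholding (shrinkage) operator.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def lb(A, b, mu, tau, iters=500):
    # Linearized Bregman: plain gradient ascent on the dual variable y;
    # the primal iterate is x = shrink(A^T y, mu).
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        x = shrink(A.T @ y, mu)       # primal iterate recovered from the dual
        y = y + tau * (b - A @ x)     # dual gradient step (gradient = b - A x)
    return shrink(A.T @ y, mu)

def alb(A, b, mu, tau, iters=500):
    # Accelerated LB: Nesterov extrapolation applied to the same dual ascent.
    y = y_prev = np.zeros(A.shape[0])
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = y + ((t - 1.0) / t_next) * (y - y_prev)  # extrapolated dual point
        x = shrink(A.T @ z, mu)
        y_prev, y = y, z + tau * (b - A @ x)         # gradient step taken at z
        t = t_next
    return shrink(A.T @ y, mu)

# Example: recover a sparse x0 from m < n random measurements (synthetic data).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x0 = np.zeros(200)
x0[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x0
tau = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/||A||_2^2: safe dual step size
x_lb, x_alb = lb(A, b, 5.0, tau), alb(A, b, 5.0, tau)
```

The step size \(\tau = 1/\|A\|_2^2\) is a safe choice because the dual gradient is Lipschitz continuous with constant \(\|A\|_2^2\); with it, the LB iteration above exhibits the \(O(1/\epsilon)\) complexity and the extrapolated variant the \(O(1/\sqrt{\epsilon})\) complexity stated in the abstract.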

Keywords

Convex optimization · Linearized Bregman method · Accelerated linearized Bregman method · Compressed sensing · Basis pursuit · Matrix completion


Acknowledgements

We would like to thank Wotao Yin for fruitful discussions and the anonymous referees for making several very helpful suggestions.

References

  1. Barzilai, J., Borwein, J.: Two point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)
  2. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
  3. Becker, S., Candès, E.J., Grant, M.: Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput. 3, 165–218 (2011)
  4. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Upper Saddle River (1989)
  5. Bregman, L.: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 7, 200–217 (1967)
  6. Cai, J.-F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20, 1956–1982 (2010)
  7. Cai, J.-F., Osher, S., Shen, Z.: Linearized Bregman iterations for compressed sensing. Math. Comput. 78, 1515–1536 (2009)
  8. Cai, J.-F., Osher, S., Shen, Z.: Convergence of the linearized Bregman iteration for ℓ1-norm minimization. Math. Comput. 78, 2127–2136 (2009)
  9. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)
  10. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56, 2053–2080 (2010)
  11. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
  12. Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
  13. Friedlander, M.P., Tseng, P.: Exact regularization of convex programs. SIAM J. Optim. 18, 1326–1350 (2007)
  14. Goldfarb, D., Ma, S.: Fast multiple splitting algorithms for convex optimization. Technical Report, Department of IEOR, Columbia University (2009). Preprint available at http://arxiv.org/abs/0912.4570
  15. Goldfarb, D., Ma, S.: Convergence of fixed point continuation algorithms for matrix rank minimization. Found. Comput. Math. 11, 183–210 (2011)
  16. Goldfarb, D., Ma, S., Scheinberg, K.: Fast alternating linearization methods for minimizing the sum of two convex functions. Math. Program., Ser. A (2012). doi:10.1007/s10107-012-0530-2
  17. Goldfarb, D., Scheinberg, K.: Fast first-order methods for composite convex optimization with line search. Technical Report, Department of IEOR, Columbia University (2011)
  18. Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57, 1548–1566 (2011)
  19. Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for ℓ1-minimization: methodology and convergence. SIAM J. Optim. 19, 1107–1130 (2008)
  20. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
  21. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program., Ser. B 45, 503–528 (1989)
  22. Liu, Y., Sun, D., Toh, K.-C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. (2011). doi:10.1007/s10107-010-0437-8
  23. Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program., Ser. A 128, 321–353 (2011)
  24. Mangasarian, O.L., Meyer, R.R.: Nonlinear perturbation of linear programs. SIAM J. Control Optim. 17, 745–752 (1979)
  25. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227–234 (1995)
  26. Nemirovski, A.: Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2005)
  27. Nesterov, Y.E.: A method for unconstrained convex minimization problem with the rate of convergence \(\mathcal{O}(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
  28. Nesterov, Y.E.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic, Boston (2004)
  29. Nesterov, Y.E.: Smooth minimization of non-smooth functions. Math. Program., Ser. A 103, 127–152 (2005)
  30. Nesterov, Y.E.: Gradient methods for minimizing composite objective function. CORE Discussion Paper 2007/76 (2007)
  31. Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4, 460–489 (2005)
  32. Osher, S., Mao, Y., Dong, B., Yin, W.: Fast linearized Bregman iteration for compressive sensing and sparse denoising. Commun. Math. Sci. 8, 93–111 (2010)
  33. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1972)
  34. Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)
  35. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)
  36. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
  37. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)
  38. ACM SIGKDD and Netflix: Proceedings of KDD Cup and Workshop (2007). Available online at http://www.cs.uic.edu/~liub/KDD-cup-2007/proceedings.html
  39. Srebro, N.: Learning with matrix factorizations. PhD thesis, Massachusetts Institute of Technology (2004)
  40. Srebro, N., Jaakkola, T.: Weighted low-rank approximations. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003) (2003)
  41. Toh, K.-C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. J. Optim. 6, 615–640 (2010)
  42. Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization. SIAM J. Optim. (2008, submitted)
  43. van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31, 890–912 (2008)
  44. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Technical Report, Department of CAAM, Rice University (2010)
  45. Yin, W.: Analysis and generalizations of the linearized Bregman method. SIAM J. Imaging Sci. 3, 856–877 (2010)
  46. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1, 143–168 (2008)

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  1. Department of Industrial Engineering and Operations Research, Columbia University, New York, USA
  2. Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, USA
