Convex Minimization Algorithms

Optimization

Part of the book series: Springer Texts in Statistics (STS, volume 95)

Abstract

This chapter delves into three advanced algorithms for convex minimization. The projected gradient algorithm is useful in minimizing a strictly convex quadratic over a closed convex set. Although the algorithm extends to more general convex functions, the best theoretical results are available in this limited setting. We rely on the MM principle to motivate and extend the algorithm. The connections to Dykstra’s algorithm and the contraction mapping principle add to the charm of the subject. On the minus side of the ledger, the projected gradient method can be very slow to converge. This defect is partially offset by ease of coding in many problems.
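
For concreteness, here is a minimal sketch of the projected gradient iteration for a strictly convex quadratic f(x) = 0.5 x'Ax - b'x over a closed convex set. The sketch is illustrative rather than drawn from the chapter: the function names, the nonnegative orthant as the example constraint set, and the constant step size 1/L (with L the largest eigenvalue of A) are all assumptions.

    import numpy as np

    def projected_gradient(A, b, project, x0, max_iter=1000, tol=1e-8):
        """Minimize f(x) = 0.5 x'Ax - b'x over a closed convex set.

        A       : symmetric positive definite matrix (strict convexity)
        b       : vector
        project : map sending a point to its nearest point in the set
        x0      : starting point
        (Illustrative sketch; names and defaults are assumptions.)
        """
        # The gradient Ax - b is Lipschitz with constant L, the largest
        # eigenvalue of A; the constant step 1/L is the standard safe choice.
        L = np.linalg.eigvalsh(A).max()
        x = project(np.asarray(x0, dtype=float))
        for _ in range(max_iter):
            grad = A @ x - b
            x_new = project(x - grad / L)  # gradient step, then project back
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: the nonnegative orthant, whose projection is coordinatewise clipping.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + np.eye(5)          # symmetric positive definite
    b = rng.standard_normal(5)
    x_star = projected_gradient(A, b, lambda z: np.maximum(z, 0.0), np.zeros(5))

The step 1/L also illustrates the two connections the abstract mentions: each update minimizes the quadratic majorizer f(x_n) + ∇f(x_n)'(x - x_n) + (L/2)||x - x_n||² over the constraint set, so the iteration is an MM step, and when the smallest eigenvalue μ of A is positive the update map is a contraction with modulus 1 - μ/L, so convergence follows from the contraction mapping principle.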

© 2013 Springer Science+Business Media New York

Cite this chapter

Lange, K. (2013). Convex Minimization Algorithms. In: Optimization. Springer Texts in Statistics, vol 95. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5838-8_16
