
Accelerated Algorithms for Constrained Convex Optimization

Chapter in Accelerated Optimization for Machine Learning

Abstract

This chapter reviews representative accelerated algorithms for deterministic constrained convex optimization. We give an overview of the accelerated penalty method, the accelerated Lagrange multiplier method, and the accelerated augmented Lagrange multiplier method. In particular, we concentrate on two widely used algorithms, namely the alternating direction method of multipliers (ADMM) and the primal-dual method. For ADMM, we study four scenarios: the generally convex and nonsmooth case, the strongly convex and nonsmooth case, the generally convex and smooth case, and the strongly convex and smooth case. We also introduce its non-ergodic accelerated variant. For the primal-dual method, we study three scenarios: both functions are generally convex; both are strongly convex; and one is generally convex while the other is strongly convex. Finally, we introduce the Frank–Wolfe algorithm under the condition that the constraint set is strongly convex.
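For orientation, here is a minimal sketch of two of the schemes at the heart of the chapter, in generic notation; the symbols f, g, A, B, b, the penalty β, and the step sizes below are illustrative and need not match the book's. ADMM solves a linearly constrained two-block problem by alternating minimization of the augmented Lagrangian with a dual ascent step:

\[
\begin{aligned}
&\min_{x,\,z}\ f(x) + g(z) \quad \text{s.t.}\quad Ax + Bz = b,\\
&L_\beta(x,z,\lambda) = f(x) + g(z) + \langle \lambda,\ Ax + Bz - b\rangle + \tfrac{\beta}{2}\,\|Ax + Bz - b\|^2,\\
&x^{k+1} = \operatorname*{argmin}_{x}\ L_\beta\bigl(x, z^k, \lambda^k\bigr),\\
&z^{k+1} = \operatorname*{argmin}_{z}\ L_\beta\bigl(x^{k+1}, z, \lambda^k\bigr),\\
&\lambda^{k+1} = \lambda^k + \beta\,\bigl(A x^{k+1} + B z^{k+1} - b\bigr).
\end{aligned}
\]

The Frank–Wolfe (conditional gradient) step for minimizing a smooth f over a compact convex set C avoids projections entirely, calling only a linear minimization oracle; the strong convexity of C assumed above is what enables rates faster than the generic O(1/K):

\[
s^k = \operatorname*{argmin}_{s \in C}\ \bigl\langle \nabla f(x^k),\ s \bigr\rangle, \qquad x^{k+1} = x^k + \gamma_k\,(s^k - x^k), \quad \gamma_k \in [0,1].
\]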


Notes

  1. The four cases in Sects. 3.4.1–3.4.4 assume different conditions; they are not accelerated and hence not directly comparable with one another. The convergence rate in Sect. 3.4.5, however, is truly accelerated.

  2. In fact, the faster rate is due to the stronger assumption, i.e., the strong convexity of g, rather than the acceleration technique.

  3. Similar to Sect. 3.4.2, the faster rate is due to the stronger assumption rather than to the acceleration technique.
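
As a point of terminology used in these notes and the abstract (in the generic notation of the sketch above): an ergodic rate bounds the error at the running average of the iterates, whereas a non-ergodic rate, as in Sect. 3.4.5, bounds it at the last iterate itself, which is typically the stronger form of guarantee:

\[
\bar{x}^K = \frac{1}{K} \sum_{k=1}^{K} x^k \ \ \text{(ergodic)}, \qquad x^K \ \ \text{(non-ergodic)}.
\]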



Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Lin, Z., Li, H., Fang, C. (2020). Accelerated Algorithms for Constrained Convex Optimization. In: Accelerated Optimization for Machine Learning. Springer, Singapore. https://doi.org/10.1007/978-981-15-2910-8_3


  • DOI: https://doi.org/10.1007/978-981-15-2910-8_3


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-2909-2

  • Online ISBN: 978-981-15-2910-8

  • eBook Packages: Computer Science (R0)
