
A Golden Ratio Primal–Dual Algorithm for Structured Convex Optimization

Published in: Journal of Scientific Computing

Abstract

We design, analyze, and test a golden ratio primal–dual algorithm (GRPDA) for solving a structured convex optimization problem in which the objective function is the sum of two closed proper convex functions, one of which involves a composition with a linear transform. GRPDA preserves all the favorable features of the classical primal–dual algorithm (PDA): the primal and the dual variables are updated in a Gauss–Seidel manner, and the per-iteration cost is dominated by the evaluation of the proximal mappings of the two component functions and two matrix–vector multiplications. Whereas the classical PDA takes an extrapolation step, the novelty of GRPDA is that it is constructed from a convex combination of essentially the whole iteration trajectory. We show that GRPDA converges for a broader range of parameters than the classical PDA, provided that the reciprocal of the convex combination parameter is bounded above by the golden ratio, which explains the name of the algorithm. An \(\mathcal {O}(1/N)\) ergodic convergence rate is also established with respect to the primal–dual gap function, where N denotes the number of iterations. When either the primal or the dual problem is strongly convex, an accelerated GRPDA is constructed that improves the ergodic convergence rate from \(\mathcal {O}(1/N)\) to \(\mathcal {O}(1/N^2)\). Moreover, we show for regularized least-squares and linear equality constrained problems that the reciprocal of the convex combination parameter can be extended from the golden ratio to 2 and that, in addition, a relaxation step can be taken. Our preliminary numerical results on LASSO, nonnegative least-squares, and minimax matrix game problems, with comparisons to several state-of-the-art related algorithms, demonstrate the efficiency of the proposed algorithms.
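For concreteness, the kind of iteration described in the abstract can be illustrated on one of the test problems mentioned there, the LASSO model min_x lam*||x||_1 + 0.5*||Ax - b||^2. The sketch below is a schematic reconstruction under stated assumptions, not the authors' exact method: the update order, the convex-combination parameter `psi` (whose reciprocal the abstract bounds by the golden ratio), and the step sizes `tau` and `sigma` are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1 (component-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grpda_lasso(A, b, lam, n_iter=3000, psi=1.5):
    """Golden-ratio-style primal-dual iteration for the LASSO model
    min_x lam*||x||_1 + 0.5*||A x - b||^2, in saddle-point form with
    f = lam*||.||_1 and g = 0.5*||. - b||^2.

    Schematic sketch only: psi and the step sizes tau, sigma below are
    illustrative choices, not the paper's parameter rules.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)      # spectral norm ||A||
    tau = sigma = 1.0 / L         # so that tau*sigma*||A||^2 = 1 < psi
    x = np.zeros(n)               # primal iterate
    z = np.zeros(n)               # anchor accumulating the trajectory
    y = np.zeros(m)               # dual iterate
    for _ in range(n_iter):
        # z is a convex combination of the new iterate and the old anchor,
        # so it blends essentially the whole primal trajectory
        z = ((psi - 1.0) * x + z) / psi
        # primal step: proximal mapping of tau*f at z - tau*A^T y
        x = soft_threshold(z - tau * (A.T @ y), tau * lam)
        # dual step: proximal mapping of sigma*g*, where g* is the
        # convex conjugate of g; for this g it is a simple averaging
        y = (y + sigma * (A @ x - b)) / (1.0 + sigma)
    return x
```

As in the abstract's description of the per-iteration cost, each pass performs two proximal evaluations and two matrix–vector products with A and its transpose.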


Notes

  1. An operator P is \(\alpha \)-averaged for some \(\alpha \in (0,1)\) if there exists a nonexpansive operator Q such that \(P = (1-\alpha )I +\alpha Q\).

  2. https://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lsq/lsq.html.
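The definition of an \(\alpha \)-averaged operator in the first note can be seen in action with a small example (illustrative, not from the paper): a planar 90-degree rotation Q is nonexpansive, yet its plain iterates circle the origin forever, while the 1/2-averaged map \(P = \tfrac{1}{2}I + \tfrac{1}{2}Q\) drives iterates to the unique fixed point 0.

```python
import numpy as np

# Q(x) = R x, a 90-degree rotation: nonexpansive (an isometry) with
# unique fixed point 0, but iterating Q alone never converges.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def P(x, alpha=0.5):
    # alpha-averaged version of Q: P = (1 - alpha)*I + alpha*Q
    return (1.0 - alpha) * x + alpha * (R @ x)

x = np.array([1.0, 0.0])
for _ in range(100):
    x = P(x)
# the averaged iterates contract toward the fixed point 0
```

Averaging a nonexpansive map in this way is what turns mere nonexpansiveness into convergence of the fixed-point iteration.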


Acknowledgements

Junfeng Yang was supported by the NSFC Grants 11922111 and 11771208. The research of Xiaokai Chang was supported by the Innovation Ability Improvement Project of Gansu (Grant No. 2020A022) and the Hongliu Foundation of First-class Disciplines of Lanzhou University of Technology, China.

Corresponding author

Correspondence to Junfeng Yang.


About this article


Cite this article

Chang, X., Yang, J. A Golden Ratio Primal–Dual Algorithm for Structured Convex Optimization. J Sci Comput 87, 47 (2021). https://doi.org/10.1007/s10915-021-01452-9

