
On relaxation of some customized proximal point algorithms for convex minimization: from variational inequality perspective

Article

Abstract

The proximal point algorithm (PPA) is a fundamental method for convex programming. When applying the PPA to linearly constrained convex problems, one may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA is reduced and its subproblems sometimes even admit closed-form or efficiently computable solutions. This idea leads to the so-called customized PPA (also known as the preconditioned PPA), which covers the linearized augmented Lagrangian method (ALM), the primal-dual hybrid gradient (PDHG) algorithm, and the alternating direction method of multipliers (ADMM) as special cases. Since each customized PPA has its own special structure and popular applications, it is natural to ask whether a simple relaxation strategy can be designed for these algorithms. In this paper, we treat these customized PPAs uniformly via a mixed variational inequality approach and propose a new relaxation strategy for them. Our idea is based on correcting the dual variables individually; it does not rely on relaxing the primal variables, which distinguishes it from previous works. From the variational inequality perspective, we prove global convergence and establish a worst-case convergence rate for the relaxed PPAs. Finally, we demonstrate the performance improvements with some numerical results.
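To make the relaxation idea concrete, below is a minimal sketch, assuming a PDHG-type customized PPA applied to the model problem min 0.5*||x - c||^2 subject to Ax = b; it is not the paper's exact scheme. The quadratic objective, the step sizes r and s, and the relaxation factor gamma are illustrative assumptions, chosen so that both steps have closed forms; in the spirit of the strategy described above, the relaxation is applied to the dual variable only.

import numpy as np

# Test problem:  min 0.5*||x - c||^2  s.t.  A x = b  (b chosen so the
# constraint set is nonempty).
rng = np.random.default_rng(0)
m, n = 5, 10
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
b = A @ rng.standard_normal(n)

r = 2.0 * np.linalg.norm(A, 2) ** 2   # primal proximal parameter (assumed)
s = 1.0                               # dual step size; here r > s*||A||^2
gamma = 1.5                           # dual relaxation factor, assumed in (0, 2)

x, lam = np.zeros(n), np.zeros(m)
for _ in range(500):
    # Primal step: closed-form minimizer of
    #   0.5*||x - c||^2 - lam^T A x + (r/2)*||x - x^k||^2.
    x_new = (c + A.T @ lam + r * x) / (1.0 + r)
    # Dual predictor uses the extrapolated point 2*x_new - x.
    lam_tilde = lam - s * (A @ (2.0 * x_new - x) - b)
    # Relaxation applied to the dual variable only; the primal is kept as is.
    lam = lam + gamma * (lam_tilde - lam)
    x = x_new

print("constraint residual:", np.linalg.norm(A @ x - b))

In this sketch, r*s > ||A||^2 is the usual step-size condition for the unrelaxed PDHG-type iteration, and gamma plays the role of an over-relaxation factor; both choices are illustrative assumptions rather than the conditions established in the paper.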

Keywords

Convex minimization · Proximal point algorithm · Relaxation · Augmented Lagrangian method

Notes

Acknowledgements

The author is grateful to the associate editor and two anonymous reviewers for their valuable comments and suggestions, which have greatly improved the presentation of this paper. This work was supported by the NSFC Grants 11701564 and 11871029.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. High-Tech Institute of Xi’an, Xi’an, China
