
Convergence rates with inexact non-expansive operators

  • Full Length Paper
  • Series A

Mathematical Programming

Abstract

In this paper, we present a convergence rate analysis for the inexact Krasnosel’skiĭ–Mann iteration built from non-expansive operators. The presented results consist of two main parts: we first establish global pointwise and ergodic iteration-complexity bounds; then, under a metric sub-regularity assumption, we establish local linear convergence of the distance of the iterates to the set of fixed points. The obtained results can be applied to analyze the convergence rate of various monotone operator splitting methods in the literature, including Forward–Backward splitting, Generalized Forward–Backward splitting, Douglas–Rachford splitting, the alternating direction method of multipliers and Primal–Dual splitting methods. For these methods, we also develop easily verifiable termination criteria for finding an approximate solution, which can be seen as a generalization of the termination criterion for the classical gradient descent method. We finally develop a parallel analysis for the non-stationary Krasnosel’skiĭ–Mann iteration.
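
To fix ideas, the display below recalls the generic form of the inexact Krasnosel’skiĭ–Mann iteration and the fixed-point-residual stopping rule referred to above. This is a schematic summary; the symbols \(\lambda_k\) (relaxation parameters), \(\varepsilon_k\) (evaluation errors) and \(\mathrm{tol}\) are generic placeholders rather than the paper's exact notation.

\[
  x_{k+1} = x_k + \lambda_k \bigl( T x_k + \varepsilon_k - x_k \bigr), \qquad \lambda_k \in (0,1],
\]

with the iteration stopped once the fixed-point residual satisfies \(\Vert x_k - T x_k \Vert \le \mathrm{tol}\). For gradient descent one has \(T = \mathrm{Id} - \gamma \nabla f\), so this rule reduces to \(\gamma \Vert \nabla f(x_k)\Vert \le \mathrm{tol}\), i.e. the classical gradient-norm criterion that the termination criteria of the paper generalize.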


Notes

  1. In fact, in many cases the fixed-point operator \(T\) is even \(\alpha\)-averaged; see Definition 1.

  2. The authors consider the case where \(\mathcal {H}\) is any normed space.

  3. One may observe that \(\gamma \) can be modified locally, once the iterates enter the appropriate neighbourhood, to get the usual optimal linear rate of gradient descent with a strongly convex objective [57]. But this is not our aim here.

  4. Let \((x_k, v_k) \in \operatorname{gra} A\) be a sequence generated by an iterative method for solving the monotone inclusion problem \(0 \in A x\). The non-uniform iteration-complexity bound then means that for every \(k\in \mathbb {N}\) there exists \(j\le k\) such that \(\Vert v_j\Vert =O(1/\sqrt{k})\); a minimal numerical sketch of tracking such a best-iterate quantity is given after these notes.
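
As a concrete illustration of the residual-based termination criterion mentioned in the abstract and of the best-iterate bound in note 4, the following is a minimal Python sketch of an inexact Krasnosel’skiĭ–Mann loop applied to the gradient-descent operator \(T = \mathrm{Id} - \gamma \nabla f\) for a least-squares objective. The problem data, the step size gamma, the relaxation lam, the error model and the tolerance tol are illustrative assumptions, not the paper's notation or experiments.

    import numpy as np

    # Illustrative smooth problem: f(x) = 0.5 * ||A x - b||^2.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)

    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad f
    gamma = 1.0 / L                 # step size: T = Id - gamma * grad f is averaged
    lam = 1.0                       # relaxation parameter lambda_k in (0, 1]
    tol = 1e-6                      # tolerance on the fixed-point residual

    def T(x):
        """Gradient-descent operator; its fixed points minimize f."""
        return x - gamma * (A.T @ (A @ x - b))

    x = np.zeros(20)
    best_residual = np.inf
    for k in range(10000):
        e = rng.standard_normal(20) * 1e-3 / (k + 1) ** 2   # summable evaluation error
        Tx = T(x) + e                                       # inexact evaluation of T
        residual = np.linalg.norm(x - Tx)                   # fixed-point residual ||x_k - T x_k||
        best_residual = min(best_residual, residual)        # best residual over j <= k (note 4)
        if residual <= tol:                                 # termination criterion
            break
        x = x + lam * (Tx - x)                              # Krasnoselskii-Mann update

    print(f"stopped at k = {k}: residual = {residual:.2e}, best = {best_residual:.2e}")

With lam = 1 and no error the loop is plain gradient descent, and the stopping test is exactly gamma * ||grad f(x_k)|| <= tol, the classical criterion that the paper's termination criteria generalize.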

References

  1. Aragòn, F.J., Geoffroy, M.H.: Characterization of metric regularity of subdifferentials. J. Convex Anal. 15(2), 365–380 (2008)

  2. Baillon, J.B.: Un théorème de type ergodique pour les contractions non linéaires dans un espace de Hilbert. C. R. Acad. Sci. Paris Sér. A-B 280, 1511–1514 (1975)

  3. Baillon, J.B., Bruck, R.E.: The rate of asymptotic regularity is \({O}(1/\sqrt{n})\). Lect. Notes Pure Appl. Math. 178, 51–81 (1996)

  4. Baillon, J.B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26(2), 137–150 (1977)

  5. Bauschke, H.H., Bello Cruz, J.Y., Nghia, T.A., Phan, H.M., Wang, X.: Optimal rates of convergence of matrices with applications (2014). arXiv:1407.0671

  6. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  7. Bauschke, H.H., Luke, D.R., Phan, H.M., Wang, X.: Restricted normal cones and the method of alternating projections: theory. Set-Valued Var. Anal. 21(3), 431–473 (2013)

  8. Bauschke, H.H., Noll, D., Phan, H.M.: Linear and strong convergence of algorithms involving averaged nonexpansive operators. J. Math. Anal. Appl. 421(1), 1–20 (2015)

  9. Borwein, J.M., Sims, B.: The Douglas–Rachford algorithm in the absence of convexity. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, pp. 93–109 (2011)

  10. Brézis, H., Lions, P.L.: Produits infinis de résolvantes. Israel J. Math. 29(4), 329–345 (1978)

  11. Briceno-Arias, L.M.: Forward-Douglas–Rachford splitting and forward-partial inverse method for solving monotone inclusions. Optimization 64(5), 1239–1261 (2015)

  12. Briceno-Arias, L.M., Combettes, P.L.: A monotone + skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 21(4), 1230–1250 (2011)

  13. Burachik, R.S., Scheimberg, S., Svaiter, B.F.: Robustness of the hybrid extragradient proximal-point algorithm. J. Optim. Theory Appl. 111(1), 117–136 (2001)

  14. Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

  15. Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Program. 64(1–3), 81–101 (1994)

  16. Chen, P., Huang, J., Zhang, X.: A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Prob. 29(2), 025011 (2013)

  17. Combettes, P.L.: Quasi-Fejérian analysis of some optimization algorithms. Stud. Comput. Math. 8, 115–152 (2001)

  18. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5–6), 475–504 (2004)

  19. Combettes, P.L., Pesquet, J.C.: Primal–dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Var. Anal. 20(2), 307–330 (2012)

  20. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal Forward–Backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

  21. Cominetti, R., Soto, J.A., Vaisman, J.: On the rate of convergence of Krasnoselski–Mann iterations and their connection with sums of Bernoullis. Israel J. Math. 199(2), 757–772 (2014)

  22. Condat, L.: A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158(2), 460–479 (2013)

  23. Demanet, L., Zhang, X.: Eventual linear convergence of the Douglas–Rachford iteration for basis pursuit. Math. Comput. (2015). doi:10.1090/mcom/2965

  24. Dontchev, A.L., Quincampoix, M., Zlateva, N.: Aubin criterion for metric regularity. J. Convex Anal. 13, 281–297 (2006)

  25. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings: A View from Variational Analysis. Springer, New York (2009)

  26. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)

  27. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)

  28. Fortin, M., Glowinski, R.: Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. Elsevier, Amsterdam (2000)

  29. Gabay, D.: Chapter IX: applications of the method of multipliers to variational inequalities. Stud. Math. Appl. 15, 299–331 (1983)

  30. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976)

  31. Glowinski, R., Le Tallec, P.: Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9. SIAM, Philadelphia (1989)

  32. He, B., Yuan, X.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2012)

  33. He, B., Yuan, X.: On convergence rate of the Douglas–Rachford operator splitting method. Math. Program. (2014). doi:10.1007/s10107-014-0805-x

  34. Hesse, R., Luke, D.R.: Nonconvex notions of regularity and convergence of fundamental algorithms for feasibility problems. SIAM J. Optim. 23(4), 2397–2419 (2013)

  35. Krasnosel’skii, M.A.: Two remarks on the method of successive approximations. Uspekhi Matematicheskikh Nauk 10(1), 123–127 (1955)

  36. Lemaire, B.: Stability of the iteration method for non expansive mappings. Serdica Math. J. 22(3), 331–340 (1996)

  37. Lemaire, B.: Which fixed point does the iteration method select? In: Recent Advances in Optimization. Springer, pp. 154–167 (1997)

  38. Leventhal, D.: Metric subregularity and the proximal point method. J. Math. Anal. Appl. 360(2), 681–688 (2009)

  39. Lewis, A., Luke, D., Malick, J.: Local linear convergence for alternating and averaged nonconvex projections. Found. Comput. Math. 9(4), 485–513 (2009)

  40. Li, G., Mordukhovich, B.S.: Hölder metric subregularity with applications to proximal point method. SIAM J. Optim. 22(4), 1655–1684 (2012)

  41. Liang, J., Fadili, M.J., Peyré, G.: Local linear convergence of Forward–Backward under partial smoothness. In: Advances in Neural Information Processing Systems (NIPS), pp. 1970–1978 (2014)

  42. Liang, J., Fadili, M.J., Peyré, G., Luke, R.: Activity identification and local linear convergence of Douglas–Rachford/ADMM under partial smoothness. In: SSVM 2015—International Conference on Scale Space and Variational Methods in Computer Vision (2015)

  43. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

  44. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506–510 (1953)

  45. Martinet, B.: Brève communication. Régularisation d’inéquations variationnelles par approximations successives. Revue française d’informatique et de recherche opérationnelle, série rouge 4(3), 154–158 (1970)

  46. Mercier, B.: Topics in finite element solution of elliptic problems. In: Lectures on Mathematics, vol. 63 (1979)

  47. Minty, G.J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29(3), 341–346 (1962)

  48. Monteiro, R.D.C., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20(6), 2755–2787 (2010)

  49. Moreau, J.J.: Décomposition orthogonale d’un espace Hilbertien selon deux cônes mutuellement polaires. C.R. Acad. Sci. Paris 255, 238–240 (1962)

  50. Moreau, J.J.: Proximité et dualité dans un espace Hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)

  51. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course, vol. 87. Springer, New York (2004)

  52. Ogura, N., Yamada, I.: Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping. Numer. Funct. Anal. Optim. 23(1–2), 113–137 (2002)

  53. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73(4), 591–597 (1967)

  54. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)

  55. Peaceman, D.W., Rachford Jr, H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3(1), 28–41 (1955)

  56. Phan, H.M.: Linear convergence of the Douglas–Rachford method for two closed sets. Technical Report arXiv:1401.6509v1 (2014)

  57. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)

  58. Raguet, H., Fadili, J., Peyré, G.: A generalized Forward–Backward splitting. SIAM J. Imaging Sci. 6(3), 1199–1226 (2013)

  59. Reich, S.: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274–276 (1979)

  60. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  61. Solodov, M.V.: A class of decomposition methods for convex optimization and monotone variational inclusions via the hybrid inexact proximal point framework. Optim. Methods Softw. 19(5), 557–575 (2004)

  62. Solodov, M.V., Svaiter, B.F.: A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal. 7(4), 323–345 (1999)

  63. Svaiter, B.F.: A class of Fejer convergent algorithms, approximate resolvents and the hybrid proximal-extragradient method. J. Optim. Theory Appl. 162(1), 133–153 (2014)

  64. Tseng, P.: Alternating projection-proximal methods for convex programming and variational inequalities. SIAM J. Optim. 7(4), 951–965 (1997)

  65. Vũ, B.C.: A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 38(3), 667–681 (2013)

Acknowledgments

This work has been partly supported by the European Research Council (ERC project SIGMA-Vision). J. Fadili is partly supported by Institut Universitaire de France. We would like to thank Yuchao Tang for bringing reference [52] to our attention.

Author information

Corresponding author

Correspondence to Jalal Fadili.

About this article

Cite this article

Liang, J., Fadili, J. & Peyré, G. Convergence rates with inexact non-expansive operators. Math. Program. 159, 403–434 (2016). https://doi.org/10.1007/s10107-015-0964-4

