Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings

  • Full Length Paper
  • Series A
  • Published in: Mathematical Programming

Abstract

This paper considers a networked system with a finite number of users and supposes that each user tries to minimize its own private objective function over its own private constraint set. It is assumed that each user’s constraint set can be expressed as the fixed point set of a certain quasi-nonexpansive mapping. This enables us to consider the case in which the projection onto the constraint set cannot be computed efficiently. This paper proposes two methods for minimizing the sum of the users’ nondifferentiable, convex objective functions over the intersection of the fixed point sets of their quasi-nonexpansive mappings in a real Hilbert space. One method is a parallel subgradient method that can be implemented under the assumption that each user can communicate with all other users. The other is an incremental subgradient method that can be implemented under the assumption that each user can communicate only with its neighbors. Investigation of the two methods’ convergence properties for a constant step size reveals that, with a small constant step size, they approximate a solution to the problem. Consideration of the case in which the step-size sequence is diminishing demonstrates that the sequence generated by each of the two methods strongly converges to the solution to the problem under certain assumptions. Convergence rate analysis of the two methods in certain situations is provided to illustrate their efficiency. This paper also discusses nonsmooth convex optimization over sublevel sets of convex functions and provides numerical comparisons that demonstrate the effectiveness of the proposed methods.
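As a concrete illustration of the two update schemes and of the sublevel-set setting mentioned above, the following is a minimal numpy sketch on a toy two-user problem. It shows only the iteration structure described in the abstract, not the paper's algorithms as stated; the objectives, constraint sets, mapping parameter alpha, and step sizes are illustrative assumptions.

```python
import numpy as np

# Illustrative private data: user i minimizes f_i(x) = ||x - c_i||_2 (nonsmooth
# at c_i) over its own sublevel-set constraint.  Not the paper's test problem.
c = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]

def subgrad(x, ci):
    """A subgradient of x -> ||x - ci||_2."""
    d = x - ci
    nrm = np.linalg.norm(d)
    return d / nrm if nrm > 0.0 else np.zeros_like(d)

def Q1(x):
    """Subgradient projection onto C1 = {x : ||x||^2 - 1 <= 0}: a
    quasi-nonexpansive mapping whose fixed point set is C1."""
    g = x @ x - 1.0
    if g <= 0.0:
        return x
    s = 2.0 * x  # gradient of g at x; nonzero because g(x) > 0
    return x - (g / (s @ s)) * s

def Q2(x):
    """Subgradient projection onto C2 = {x : x1 + x2 - 1.2 <= 0}; for an
    affine constraint this coincides with the metric projection."""
    g = x[0] + x[1] - 1.2
    return x if g <= 0.0 else x - (g / 2.0) * np.ones(2)

def averaged(Q, x, alpha=0.5):
    """Q_alpha := alpha * Id + (1 - alpha) * Q, a quasi-firmly nonexpansive
    relaxation of Q with the same fixed point set."""
    return alpha * x + (1.0 - alpha) * Q(x)

mappings = list(zip([Q1, Q2], c))

def parallel_step(x, lam):
    # All users update from the same point, then average (all-to-all exchange).
    return sum(averaged(Q, x - lam * subgrad(x, ci))
               for Q, ci in mappings) / len(mappings)

def incremental_step(x, lam):
    # Each user updates its predecessor's output (neighbor-to-neighbor exchange).
    for Q, ci in mappings:
        x = averaged(Q, x - lam * subgrad(x, ci))
    return x

x = np.zeros(2)
for n in range(5000):
    lam = 1.0 / (n + 1) ** 0.75  # diminishing, non-summable step size
    x = incremental_step(x, lam)
print(x)  # approaches (0.6, 0.6), the solution of this toy instance
```

Swapping `incremental_step` for `parallel_step` in the loop exercises the other communication pattern on the same toy instance.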


Notes

  1. If Q is quasi-nonexpansive, \(\langle x - Q (x), x -y \rangle \ge (1/2)\Vert x - Q (x) \Vert ^2\) \((x\in H, y\in \mathrm {Fix}(Q))\). Hence, \(\langle x - Q_{\alpha } (x), x - y \rangle \ge ((1-\alpha )/2) \Vert x-Q(x) \Vert ^2\) \((x\in H, y\in \mathrm {Fix}(Q))\); a one-line derivation is given after these notes. We need the property in Proposition 2.1(iii) to prove Lemma 3.1. Accordingly, it is assumed that each user has a quasi-firmly nonexpansive mapping (see (A1)).

  2. When \(H=\mathbb {R}^N\), any convex function \(f^{(i)}\) with \(\mathrm {dom}(f^{(i)})=\mathbb {R}^N\) is continuous [3, Corollary 8.31]. Therefore, (A2) can be replaced by the condition that \(f^{(i)}\) is convex with \(\mathrm {dom}(f^{(i)})=\mathbb {R}^N\).

  3. Under (A4), the strict convexity of f guarantees the uniqueness of the solution to Problem 2.1 [39, Corollary 25.15]. If there exists an operator who manages the system, it is reasonable to assume that the operator has a strongly convex objective function so as to guarantee the convergence of \((x_n)_{n\in \mathbb {N}}\) in Algorithm 3.1 to the desired solution, i.e., one that makes the system stable and reliable.

  4. Figure 6 shows the existence of a subsequence of \((x_n)_{n\in \mathbb {N}}\) generated by Algorithm 4.1 that converges to a solution to Problem 5.1 when all \(f^{(i)}\) are convex, while Fig. 8 indicates the convergence of \((x_n)_{n\in \mathbb {N}}\) generated by Algorithm 4.1 to the solution to Problem 5.1 when only \(f^{(1)}\) is strongly convex.

  5. http://docs.scipy.org/doc/numpy/reference/routines.random.html.

  6. See Figs. 3, 4, 5 and 6 in Sect. 4 of the extended version of this work [16] for the results for \(\lambda _n := 10^{-5}, 10^{-3}/(n+1)^{0.1}\) \((n\in \mathbb {N})\).
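For completeness, here is the derivation promised in Note 1; it assumes the standard averaged mapping \(Q_{\alpha } := \alpha \mathrm {Id} + (1-\alpha ) Q\) with \(\alpha \in (0,1)\), so that \(x - Q_{\alpha }(x) = (1-\alpha )(x - Q(x))\):

\[
\langle x - Q_{\alpha }(x), x - y \rangle = (1-\alpha ) \langle x - Q(x), x - y \rangle \ge \frac{1-\alpha }{2} \Vert x - Q(x) \Vert ^2 \quad (x\in H,\ y\in \mathrm {Fix}(Q)).
\]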

References

  1. Bauschke, H.H., Chen, J.: A projection method for approximating fixed points of quasi nonexpansive mappings without the usual demiclosedness condition. J. Nonlinear Convex Anal. 15, 129–135 (2014)

  2. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert space. Math. Oper. Res. 26, 248–264 (2001)

  3. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  4. Bertsekas, D.P., Nedić, A., Ozdaglar, A.E.: Convex Analysis and Optimization. Athena Scientific, Belmont (2003)

  5. Blatt, D., Hero, A.O., Gauchman, H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29–51 (2007)

  6. Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 51, 1771–1782 (2003)

  7. Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)

  8. Combettes, P.L., Pesquet, J.C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)

  9. Combettes, P.L., Pesquet, J.C.: A proximal decomposition method for solving convex variational inverse problems. Inverse Probl. 24, 065014 (2008)

  10. Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)

  11. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  12. Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)

  13. Helou Neto, E.S., De Pierro, A.R.: Incremental subgradients for constrained convex optimization: a unified framework and new methods. SIAM J. Optim. 20, 1547–1572 (2009)

  14. Iiduka, H.: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1–26 (2013)

  15. Iiduka, H.: Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. 149, 131–165 (2015)

  16. Iiduka, H.: Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. arXiv:1510.06148 (2015)

  17. Iiduka, H., Hishinuma, K.: Acceleration method combining broadcast and incremental distributed optimization algorithms. SIAM J. Optim. 24, 1840–1863 (2014)

  18. Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)

  19. Johansson, B., Rabi, M., Johansson, M.: A randomized incremental subgradient method for distributed optimization in networked systems. SIAM J. Optim. 20, 1157–1170 (2009)

  20. Kiwiel, K.C.: Convergence of approximate and incremental subgradient methods for convex optimization. SIAM J. Optim. 14, 807–840 (2004)

  21. Lee, S., Nedić, A.: Distributed random projection algorithm for convex optimization. IEEE J. Sel. Top. Signal Process. 7, 221–229 (2013)

  22. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)

  23. Lobel, I., Ozdaglar, A., Feijer, D.: Distributed multi-agent optimization with state-dependent communication. Math. Program. 129, 255–284 (2011)

  24. Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 59, 74–79 (2010)

  25. Nedić, A.: Random algorithms for convex minimization problems. Math. Program. 129, 225–253 (2011)

  26. Nedić, A., Bertsekas, D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12, 109–138 (2001)

  27. Nedić, A., Olshevsky, A., Ozdaglar, A., Tsitsiklis, J.N.: On distributed averaging algorithms and quantization effects. IEEE Trans. Autom. Control 54, 2506–2517 (2009)

  28. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54, 48–61 (2009)

  29. Nedić, A., Ozdaglar, A.: Cooperative distributed multi-agent optimization. In: Convex Optimization in Signal Processing and Communications, pp. 340–386 (2010)

  30. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)

  31. Pesquet, J.C., Pustelnik, N.: A parallel inertial proximal optimization method. Pac. J. Optim. 8, 273–306 (2012)

  32. Pesquet, J.C., Repetti, A.: A class of randomized primal-dual algorithms for distributed optimization. J. Nonlinear Convex Anal. (to appear)

  33. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  34. Solodov, M.V., Zavriev, S.K.: Error stability properties of generalized gradient-type algorithms. J. Optim. Theory Appl. 98, 663–680 (1998)

  35. Vasin, V.V., Ageev, A.L.: Ill-Posed Problems with A Priori Information. VSP, Utrecht (1995)

  36. Wang, M., Bertsekas, D.P.: Incremental constraint projection-proximal methods for nonsmooth convex optimization. SIAM J. Optim. (to appear)

  37. Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms for Feasibility and Optimization and Their Applications, pp. 473–504. Elsevier, Amsterdam (2001)

  38. Yamada, I., Yukawa, M., Yamagishi, M.: Minimizing the Moreau envelope of nonsmooth convex functions over the fixed point set of certain quasi-nonexpansive mappings. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 345–390. Springer, New York (2011)

  39. Zeidler, E.: Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators. Springer, New York (1985)

  40. Censor, Y., Zenios, S.A.: Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, New York (1998)

Acknowledgments

I am sincerely grateful to the editor, Alexander Shapiro, the anonymous associate editor, and the two anonymous reviewers for helping me improve the original manuscript. I also thank Kazuhiro Hishinuma for his input on the numerical examples.

Author information

Correspondence to Hideaki Iiduka.

Additional information

This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (C) (15K04763).

About this article

Cite this article

Iiduka, H. Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. Math. Program. 159, 509–538 (2016). https://doi.org/10.1007/s10107-015-0967-1
