Abstract
This paper considers a networked system with a finite number of users in which each user tries to minimize its own private objective function over its own private constraint set. Each user's constraint set is assumed to be expressible as the fixed point set of a certain quasi-nonexpansive mapping, which covers the case in which the projection onto the constraint set cannot be computed efficiently. This paper proposes two methods for minimizing the sum of the users' nondifferentiable, convex objective functions over the intersection of the fixed point sets of their quasi-nonexpansive mappings in a real Hilbert space. One is a parallel subgradient method that can be implemented under the assumption that each user can communicate with all other users. The other is an incremental subgradient method that can be implemented under the assumption that each user can communicate only with its neighbors. Analysis of the two methods' convergence properties shows that, with a small constant step size, they approximate a solution to the problem. When the step-size sequence is diminishing, the sequence generated by each of the two methods strongly converges to the solution to the problem under certain assumptions. Convergence rate analyses of the two methods under certain situations are provided to illustrate their efficiency. This paper also discusses nonsmooth convex optimization over sublevel sets of convex functions and provides numerical comparisons that demonstrate the effectiveness of the proposed methods.
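As a concrete illustration (not taken from the paper), the following sketch applies a parallel subgradient scheme of the kind described above to a toy one-dimensional instance: each user's constraint set is an interval, so its quasi-nonexpansive mapping can be taken to be the metric projection, and each private objective is f_i(x) = |x − a_i|. Every user applies its mapping to the shared iterate after a subgradient step, and the results are averaged. The problem data (`users`), the diminishing step-size rule `1/n`, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def subgrad_abs(x, a):
    """One element of the subdifferential of f(x) = |x - a|."""
    return np.sign(x - a)

def proj_interval(x, lo, hi):
    """Metric projection onto [lo, hi]; projections onto closed convex
    sets are (quasi-)firmly nonexpansive with fixed point set [lo, hi]."""
    return min(max(x, lo), hi)

def parallel_subgradient(users, x0, steps=2000):
    """Hypothetical parallel round per iteration: each user takes a
    subgradient step, applies its own mapping, then the results are
    averaged to form the next shared iterate."""
    x = x0
    for n in range(1, steps + 1):
        lam = 1.0 / n                       # diminishing step size
        ys = [proj_interval(x - lam * subgrad_abs(x, a), lo, hi)
              for (a, lo, hi) in users]
        x = sum(ys) / len(ys)               # consensus by averaging
    return x

# Three users: minimize |x-0| + |x-2| + |x-4| over [1,3] ∩ [0,3] ∩ [1,5] = [1,3];
# the minimizer is x = 2 (the median of 0, 2, 4, which lies in [1,3]).
users = [(0.0, 1.0, 3.0), (2.0, 0.0, 3.0), (4.0, 1.0, 5.0)]
print(parallel_subgradient(users, x0=5.0))  # ≈ 2.0
```

This is only a sanity-check instance; the paper's setting allows general quasi-nonexpansive mappings in a Hilbert space, not just interval projections.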
Notes
If Q is quasi-nonexpansive, \(\langle x - Q (x), x -y \rangle \ge (1/2)\Vert x - Q (x) \Vert ^2\) \((x\in H, y\in \mathrm {Fix}(Q))\). Hence, \(\langle x - Q_{\alpha } (x), x - y \rangle \ge ((1-\alpha )/2) \Vert x-Q(x) \Vert ^2\) \((x\in H, y\in \mathrm {Fix}(Q))\). We need to use the property in Proposition 2.1(iii) to prove Lemma 3.1. Accordingly, it is assumed that each user has a quasi-firmly nonexpansive mapping (see (A1)).
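The two inequalities in this note can be checked numerically. The sketch below (an illustration, not part of the paper) takes Q to be the metric projection onto the closed unit Euclidean ball — a firmly nonexpansive, hence quasi-nonexpansive, mapping whose fixed point set is the ball itself — and verifies both the basic inequality and the relaxed version with Q_α = αI + (1−α)Q for α = 1/2; the dimension, sample count, and scaling are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, radius=1.0):
    """Projection onto the closed ball of given radius centered at 0;
    its fixed point set is the ball itself."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

alpha = 0.5
for _ in range(1000):
    x = 3.0 * rng.normal(size=3)         # arbitrary point in R^3
    y = proj_ball(rng.normal(size=3))    # y in Fix(Q) = unit ball
    Qx = proj_ball(x)
    half_sq = 0.5 * np.dot(x - Qx, x - Qx)
    # <x - Q(x), x - y> >= (1/2)||x - Q(x)||^2
    assert np.dot(x - Qx, x - y) >= half_sq - 1e-12
    # relaxed mapping Q_alpha = alpha*I + (1-alpha)*Q, so that
    # <x - Q_alpha(x), x - y> >= ((1-alpha)/2)||x - Q(x)||^2
    Qax = alpha * x + (1 - alpha) * Qx
    assert np.dot(x - Qax, x - y) >= (1 - alpha) * half_sq - 1e-12
print("inequalities verified")
```

The relaxed bound follows directly from the first one because x − Q_α(x) = (1−α)(x − Q(x)).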
When \(H=\mathbb {R}^N\), a convex function \(f^{(i)}\) satisfies the continuity condition [3, Corollary 8.31]. Therefore, (A2) can be replaced by the convexity condition of \(f^{(i)}\) with \(\mathrm {dom}(f^{(i)})=\mathbb {R}^N\).
Under (A4), the strict convexity of f guarantees the uniqueness of the solution to Problem 2.1 [39, Corollary 25.15]. If there exists an operator that manages the system, it is reasonable to assume that the operator has a strongly convex objective function so as to guarantee the convergence of \((x_n)_{n\in \mathbb {N}}\) in Algorithm 3.1 to the desired solution, i.e., one that makes the system stable and reliable.
Figure 6 shows the existence of a subsequence of \((x_n)_{n\in \mathbb {N}}\) generated by Algorithm 4.1 that converges to a solution to Problem 5.1 when all \(f^{(i)}\) are convex, while Fig. 8 indicates the convergence of \((x_n)_{n\in \mathbb {N}}\) generated by Algorithm 4.1 to the solution to Problem 5.1 when only \(f^{(1)}\) is strongly convex.
See Figs. 3, 4, 5 and 6 in Sect. 4 of the extended version of this work [16] for the results for \(\lambda _n := 10^{-5}, 10^{-3}/(n+1)^{0.1}\) \((n\in \mathbb {N})\).
References
Bauschke, H.H., Chen, J.: A projection method for approximating fixed points of quasi nonexpansive mappings without the usual demiclosedness condition. J. Nonlinear Convex Anal. 15, 129–135 (2014)
Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert space. Math. Oper. Res. 26, 248–264 (2001)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Bertsekas, D.P., Nedić, A., Ozdaglar, A.E.: Convex Analysis and Optimization. Athena Scientific, Belmont (2003)
Blatt, D., Hero, A.O., Gauchman, H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29–51 (2007)
Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 51, 1771–1782 (2003)
Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)
Combettes, P.L., Pesquet, J.C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)
Combettes, P.L., Pesquet, J.C.: A proximal decomposition method for solving convex variational inverse problems. Inverse Probl. 24, 065014 (2008)
Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
Helou Neto, E.S., De Pierro, A.R.: Incremental subgradients for constrained convex optimization: a unified framework and new methods. SIAM J. Optim. 20, 1547–1572 (2009)
Iiduka, H.: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1–26 (2013)
Iiduka, H.: Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. 149, 131–165 (2015)
Iiduka, H.: Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. arXiv:1510.06148 (2015)
Iiduka, H., Hishinuma, K.: Acceleration method combining broadcast and incremental distributed optimization algorithms. SIAM J. Optim. 24, 1840–1863 (2014)
Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)
Johansson, B., Rabi, M., Johansson, M.: A randomized incremental subgradient method for distributed optimization in networked systems. SIAM J. Optim. 20, 1157–1170 (2009)
Kiwiel, K.C.: Convergence of approximate and incremental subgradient methods for convex optimization. SIAM J. Optim. 14, 807–840 (2004)
Lee, S., Nedić, A.: Distributed random projection algorithm for convex optimization. IEEE J. Sel. Top. Signal Process. 7, 221–229 (2013)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Lobel, I., Ozdaglar, A., Feijer, D.: Distributed multi-agent optimization with state-dependent communication. Math. Program. 129, 255–284 (2011)
Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 59, 74–79 (2010)
Nedić, A.: Random algorithms for convex minimization problems. Math. Program. 129, 225–253 (2011)
Nedić, A., Bertsekas, D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12, 109–138 (2001)
Nedić, A., Olshevsky, A., Ozdaglar, A., Tsitsiklis, J.N.: On distributed averaging algorithms and quantization effects. IEEE Trans. Autom. Control 54, 2506–2517 (2009)
Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54, 48–61 (2009)
Nedić, A., Ozdaglar, A.: Cooperative distributed multi-agent optimization. In: Palomar, D.P., Eldar, Y.C. (eds.) Convex Optimization in Signal Processing and Communications, pp. 340–386. Cambridge University Press, Cambridge (2010)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)
Pesquet, J.C., Pustelnik, N.: A parallel inertial proximal optimization method. Pac. J. Optim. 8, 273–306 (2012)
Pesquet, J.C., Repetti, A.: A class of randomized primal-dual algorithms for distributed optimization. J. Nonlinear Convex Anal. (to appear)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Solodov, M.V., Zavriev, S.K.: Error stability properties of generalized gradient-type algorithms. J. Optim. Theory Appl. 98, 663–680 (1998)
Vasin, V.V., Ageev, A.L.: Ill-Posed Problems with A Priori Information. VSP, Utrecht (1995)
Wang, M., Bertsekas, D.P.: Incremental constraint projection-proximal methods for nonsmooth convex optimization. SIAM J. Optim. (to appear)
Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms for Feasibility and Optimization and Their Applications, pp. 473–504. Elsevier, Amsterdam (2001)
Yamada, I., Yukawa, M., Yamagishi, M.: Minimizing the Moreau envelope of nonsmooth convex functions over the fixed point set of certain quasi-nonexpansive mappings. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 345–390. Springer, New York (2011)
Zeidler, E.: Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators. Springer, New York (1985)
Zenios, S., Censor, Y.: Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, New York (1998)
Acknowledgments
I am sincerely grateful to the editor, Alexander Shapiro, the anonymous associate editor, and the two anonymous reviewers for helping me improve the original manuscript. I also thank Kazuhiro Hishinuma for his input on the numerical examples.
Additional information
This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (C) (15K04763).
Cite this article
Iiduka, H. Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. Math. Program. 159, 509–538 (2016). https://doi.org/10.1007/s10107-015-0967-1
Keywords
- Fixed point
- Incremental subgradient method
- Nonsmooth convex optimization
- Parallel subgradient method
- Quasi-nonexpansive mapping