## Abstract

The existing algorithms for solving the convex minimization problem over the fixed point set of a nonexpansive mapping on a Hilbert space are based on algorithmic methods, such as the steepest descent method and conjugate gradient methods, for finding a minimizer of the objective function over the whole space, and they emphasize decreasing the objective function as quickly as possible. It is also of practical importance, however, to devise algorithms that converge to the fixed point set quickly, because the fixed point set encodes the constraint conditions that must be satisfied in the problem. This paper proposes an algorithm that not only minimizes the objective function quickly but also converges to the fixed point set much faster than the existing algorithms, and it proves that the algorithm with diminishing step-size sequences converges strongly to the solution of the convex minimization problem. We also analyze the proposed algorithm with each of the Fletcher–Reeves, Polak–Ribière–Polyak, Hestenes–Stiefel, and Dai–Yuan formulas used in conventional conjugate gradient methods, and we show that, inconveniently, the resulting algorithms may fail to converge to the solution of the convex minimization problem. Numerical comparisons with the existing algorithms demonstrate the effectiveness and fast convergence of the proposed algorithm.

## Notes

\((d_n^f)_{n\in \mathbb {N}}\) is referred to as a descent search direction if \(\langle d_n^f, {\nabla }\! f (x_n) \rangle < 0\) for all \(n\in \mathbb {N}\).

These are defined as follows: \(\delta _n^{\mathrm {FR}}:=\Vert {\nabla }\! f (x_{n+1})\Vert ^2 /\Vert {\nabla }\! f (x_n)\Vert ^2, \delta _n^{\mathrm {PRP}}:= v_n/\Vert {\nabla }\! f (x_n)\Vert ^2, \delta _n^{\mathrm {HS}}:= v_n / u_n, \delta _n^{\mathrm {DY}}:=\Vert \nabla f (x_{n+1})\Vert ^2 /u_n\), where \(u_n := \langle d_n^f, {\nabla }\! f (x_{n+1}) - {\nabla }\! f (x_n) \rangle \) and \(v_n:= \langle {\nabla }\! f (x_{n+1}), {\nabla }\! f (x_{n+1}) - {\nabla }\! f (x_n) \rangle \).
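As a concrete check of these definitions, the following sketch (our own illustration; the vectors `g_n`, `g_next`, and `d_n` are made-up stand-ins for the gradients and search direction) computes the four parameters:

```python
import numpy as np

# Illustrative values only (g_n, g_next, d_n are our own choices).
g_n = np.array([3.0, -4.0])    # stands for grad f(x_n);     ||g_n||^2 = 25
g_next = np.array([1.0, 2.0])  # stands for grad f(x_{n+1}); ||g_next||^2 = 5
d_n = np.array([-3.0, 4.0])    # stands for the direction d_n^f

y = g_next - g_n  # gradient difference
u = d_n @ y       # u_n = <d_n^f, grad f(x_{n+1}) - grad f(x_n)>
v = g_next @ y    # v_n = <grad f(x_{n+1}), grad f(x_{n+1}) - grad f(x_n)>

delta_FR = (g_next @ g_next) / (g_n @ g_n)  # Fletcher-Reeves
delta_PRP = v / (g_n @ g_n)                 # Polak-Ribiere-Polyak
delta_HS = v / u                            # Hestenes-Stiefel
delta_DY = (g_next @ g_next) / u            # Dai-Yuan
```

Note that the HS and DY parameters are undefined when \(u_n = 0\); the line-search conditions discussed below rule this case out.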

For example, when \(\mathrm {Fix}(N)\) is bounded, we can choose \(K\) to be a closed ball with a large radius containing \(\mathrm {Fix}(N)\). The metric projection onto such a \(K\) is easily computed (see also Sect. 2.1). See the final paragraph of Sect. 3.1 for a discussion of Problem 3.1 when a bound on \(\mathrm {Fix}(N)\) either does not exist or is not known.

The conjugate gradient method with the DY formula (i.e., \(\delta _n^{(1)}:= \delta _n^{\mathrm {DY}}\)) generates the descent search direction under the Wolfe conditions [29]. Whether or not the conjugate gradient methods generate descent search directions depends on the choices of \(\delta _n^{(1)}\) and \(\alpha _n\).

Reference [10, Sect. 2.1] showed that \(x_{n+1}:= x_n + \alpha _n d_n^{f}\) and \(d_{n+1}^{f}:= - {\nabla }\! f (x_{n+1}) + \delta _n^{(1)} d_n^{f} - \delta _n^{(2)} z_n\), where \(\alpha _n\) and \(\delta _n^{(1)} (>0)\) are arbitrary, \(z_n (\in \mathbb {R}^N)\) is any vector, and \(\delta _n^{(2)}:= \delta _n^{(1)} (\langle {\nabla }\! f(x_{n+1}), d_n^{f} \rangle / \langle {\nabla }\! f(x_{n+1}), z_n \rangle )\), satisfy \(\langle d_n^f, {\nabla }\! f(x_n) \rangle = -\Vert {\nabla }\! f(x_n)\Vert ^2\) \((n\in \mathbb {N})\).
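This identity holds regardless of how \(\delta _n^{(1)}\) and \(z_n\) are chosen, which can be checked numerically; the sketch below is our own illustration with random vectors standing in for the gradients and directions:

```python
import numpy as np

rng = np.random.default_rng(0)
g_next = rng.standard_normal(5)  # stands for grad f(x_{n+1})
d_n = rng.standard_normal(5)     # previous direction d_n^f
z_n = rng.standard_normal(5)     # any vector with <grad f(x_{n+1}), z_n> != 0
delta1 = 0.7                     # arbitrary delta_n^(1) > 0

# delta_n^(2) as defined in the note above
delta2 = delta1 * (g_next @ d_n) / (g_next @ z_n)
d_next = -g_next + delta1 * d_n - delta2 * z_n  # three-term direction d_{n+1}^f

# <d_{n+1}^f, grad f(x_{n+1})> = -||grad f(x_{n+1})||^2,
# independent of the choices of delta_n^(1) and z_n
assert abs(d_next @ g_next + g_next @ g_next) < 1e-10
```

The two correction terms are constructed so that their contributions to the inner product with the gradient cancel exactly, leaving the steepest descent value.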

Given a halfspace \(S:= \{ x\in H :\langle a,x\rangle \le b \}\), where \(a (\ne 0) \in H\) and \(b\in \mathbb {R}\), the mapping \(N (x):= P_{S} (x) = x - [\max \{ 0, \langle a,x \rangle -b \} /\Vert a\Vert ^2] a\) \((x\in H)\) is nonexpansive with \(\mathrm {Fix}(N) = \mathrm {Fix}(P_{S}) = S \ne \emptyset \) [18, p. 406], [17, Chap. 28.3]. However, we cannot define a bounded \(K\) satisfying \(\mathrm {Fix}(N) = S \subset K\).
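The closed-form projection is easy to evaluate; a minimal sketch (the halfspace data `a`, `b` and the test points are our own choices) checks the formula and the nonexpansiveness inequality:

```python
import numpy as np

# Halfspace S = { x : <a, x> <= b } (illustrative data)
a = np.array([1.0, 2.0])
b = 3.0

def N(x):
    # P_S(x) = x - [max{0, <a,x> - b} / ||a||^2] a
    return x - (max(0.0, a @ x - b) / (a @ a)) * a

x = np.array([5.0, 5.0])  # outside S: <a,x> = 15 > 3
p = N(x)                  # projection lands on the boundary <a,p> = b
y = np.array([0.0, 1.0])  # inside S, hence a fixed point of N

assert abs(a @ p - b) < 1e-12
assert np.allclose(N(y), y)
# nonexpansiveness: ||N(x) - N(y)|| <= ||x - y||
assert np.linalg.norm(N(x) - N(y)) <= np.linalg.norm(x - y) + 1e-12
```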

Suppose that \((x_n)_{n\in \mathbb {N}} (\subset H)\) weakly converges to \(\hat{x} \in H\) and \(\bar{x} \ne \hat{x}\). Then, the following condition, called Opial’s condition [30], is satisfied: \(\liminf _{n\rightarrow \infty }\Vert x_n - \hat{x}\Vert < \liminf _{n\rightarrow \infty }\Vert x_n - \bar{x}\Vert \). In the above situation, Opial’s condition leads to \(\liminf _{i \rightarrow \infty }\Vert x_{n_i} - x^*\Vert < \liminf _{i \rightarrow \infty }\Vert x_{n_i} - \hat{N} (x^*)\Vert \).

We randomly chose \(\lambda _Q^k \in (1, S)\) \((k=2,3,\ldots , S-1)\) and set \(\hat{Q} \in \mathbb {R}^{S \times S}\) to be the diagonal matrix with eigenvalues \(\lambda _Q^1, \lambda _Q^2, \ldots , \lambda _Q^S\). We then constructed a positive definite matrix \(Q \in \mathbb {R}^{S \times S}\) from \(\hat{Q}\) and an orthogonal matrix.
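This construction can be sketched as follows. The sketch is our own; the text leaves the extreme eigenvalues unspecified, so we assume \(\lambda _Q^1 = 1\) and \(\lambda _Q^S = S\), and we obtain the orthogonal matrix from a QR factorization of a Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 4

# Eigenvalues: interior ones drawn from (1, S); the extremes 1 and S
# are our assumption (the text does not specify them).
eig = np.empty(S)
eig[0], eig[-1] = 1.0, float(S)
eig[1:-1] = rng.uniform(1.0, S, S - 2)
Q_hat = np.diag(eig)

# Random orthogonal matrix U via QR factorization of a Gaussian matrix
U, _ = np.linalg.qr(rng.standard_normal((S, S)))
Q = U @ Q_hat @ U.T  # similar to Q_hat, hence positive definite

assert np.allclose(Q, Q.T)
assert np.all(np.linalg.eigvalsh(Q) > 0)
```

Since \(Q = U \hat{Q} U^{\top }\) with \(U\) orthogonal, \(Q\) has exactly the prescribed eigenvalues.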

\(x\in \mathbb {R}^S\) satisfies \(\Vert x - N(x)\Vert = 0\) if and only if \(x\in \mathrm {Fix}(N)\).

See Remark 3.2 on the nonmonotonicity of \((\Vert x_n - N(x_n)\Vert )_{n\in \mathbb {N}}\) in Algorithm 3.1.

## References

1. Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473–504. Elsevier, Amsterdam (2001)
2. Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. **51**, 1771–1782 (2003)
3. Slavakis, K., Yamada, I.: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. **55**, 4511–4522 (2007)
4. Iiduka, H.: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. **22**, 862–878 (2012)
5. Iiduka, H., Uchida, M.: Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Commun. Lett. **15**, 596–598 (2011)
6. Combettes, P.L., Bondon, P.: Hard-constrained inconsistent signal feasibility problems. IEEE Trans. Signal Process. **47**, 2460–2468 (1999)
7. Yamada, I., Ogura, N., Shirakawa, N.: A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp. Math. **313**, 269–305 (2002)
8. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (1999)
9. Cheng, W.: A two-term PRP-based descent method. Numer. Funct. Anal. Optim. **28**, 1217–1230 (2007)
10. Narushima, Y., Yabe, H., Ford, J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. **21**, 212–230 (2011)
11. Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. **26**, 629–640 (2006)
12. Zhang, L., Zhou, W., Li, D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. **104**, 561–572 (2006)
13. Zhang, L., Zhou, W., Li, D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. **22**, 697–711 (2007)
14. Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. **19**, 1881–1893 (2009)
15. Iiduka, H.: Three-term conjugate gradient method for the convex optimization problem over the fixed point set of a nonexpansive mapping. Appl. Math. Comput. **217**, 6315–6327 (2011)
16. Iiduka, H.: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. **148**, 580–592 (2011)
17. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
18. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. **38**, 367–426 (1996)
19. Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
20. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
21. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
22. Stark, H., Yang, Y.: Vector Space Projections: A Numerical Approach to Signal and Image Processing. Wiley, London (1998)
23. Wolfe, P.: Finding the nearest point in a polytope. Math. Program. **11**, 128–149 (1976)
24. Aoyama, K., Kimura, Y., Takahashi, W., Toyoda, M.: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. **8**, 471–489 (2007)
25. Ekeland, I., Témam, R.: Convex Analysis and Variational Problems. Classics Appl. Math., vol. 28. SIAM, Philadelphia (1999)
26. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics Appl. Math., vol. 31. SIAM, Philadelphia (2000)
27. Borwein, J.M., Lewis, A.S.: Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, Berlin (2000)
28. Zeidler, E.: Nonlinear Functional Analysis and Its Applications III: Variational Methods and Optimization. Springer, Berlin (1985)
29. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. **10**, 177–182 (1999)
30. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. **73**, 591–597 (1967)
31. Bakushinsky, A., Goncharsky, A.: Ill-Posed Problems: Theory and Applications. Kluwer, Dordrecht (1994)
32. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. **2**, 183–202 (2009)
33. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^2)\). Dokl. Akad. Nauk SSSR **269**, 543–547 (1983)
34. Iiduka, H.: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. **23**, 1–26 (2013)
35. Iiduka, H.: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. **133**, 227–242 (2012)
36. Iiduka, H., Yamada, I.: Computational method for solving a stochastic linear-quadratic control problem given an unsolvable stochastic algebraic Riccati equation. SIAM J. Control Optim. **50**, 2173–2192 (2012)

## Acknowledgments

I wrote Sect. 3.2 by referring to the referee’s report on the original manuscript of [14]. I am sincerely grateful to the anonymous referee who reviewed the original manuscript of [14] for helping me compile the paper. I would also like to thank the Co-Editor, Michael C. Ferris, and the two anonymous reviewers for helping me improve the original manuscript.

## Additional information

This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for Young Scientists (B) (23760077), and in part by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (C) (22540175).

## About this article

### Cite this article

Iiduka, H. Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping.
*Math. Program.* **149**, 131–165 (2015). https://doi.org/10.1007/s10107-013-0741-1


### Keywords

- Convex optimization
- Fixed point set
- Nonexpansive mapping
- Conjugate gradient method
- Three-term conjugate gradient method
- Fixed point optimization algorithm

### Mathematics Subject Classification (2010)

- 47H07
- 47H09
- 65K05
- 65K10
- 90C25
- 90C30
- 90C52